
posted by n1 on Monday June 05 2017, @10:15AM
from the git-gud dept.

The Open Source Survey asked a broad array of questions. One that caught my eye was about problems people encounter when working with, or contributing to, open source projects. An incredible 93 percent of people reported being frustrated with “incomplete or confusing documentation”.

That’s hardly a surprise. There are a lot of projects on GitHub with the sparsest of descriptions, and scant instruction on how to use them. If you aren’t clever enough to figure it out for yourself, tough.

[...] According to the GitHub Open Source Survey, 60 percent of contributors rarely or never contribute to documentation. And that’s fine.

Documenting software is extremely difficult. People go to university to learn to become technical writers, spending thousands of dollars, and several years of their life. It’s not really reasonable to expect every developer to know how to do it, and do it well.

2017 Open Source Survey

-- submitted from IRC


Original Submission

 
  • (Score: 3, Funny) by DannyB on Monday June 05 2017, @03:44PM (12 children)

    by DannyB (5839) Subscriber Badge on Monday June 05 2017, @03:44PM (#520797) Journal

    Crappy documentation has been around for decades. Nothing new ;-)

    Good documentation has been around for decades too.

    But it was three feet thick. And bolted to a table. Documentation could not physically be moved to another table, let alone taken from the computer room. You had to memorize three feet of documentation. There was no GUI. It was uphill both ways. Hey you kids, get off my lawn!

    --
    People today are educated enough to repeat what they are taught but not to question what they are taught.
  • (Score: 2) by VLM on Monday June 05 2017, @05:20PM (11 children)

    by VLM (445) Subscriber Badge on Monday June 05 2017, @05:20PM (#520853)

    Like you I miss the pre-internet days.

    I think you're mixing your memes. I had momentary access to VMS and a DEC wall, and a DEC wall was about 3 feet of color coded binders; supposedly it made sense after you memorized all three feet. All I remember was that the CLI for VMS was DCL (DEC Command Language, or something like that) and it was fabulous with respect to consistency and online help. I had much longer term access to SunOS (pre-Solaris Unix) and HP-UX, and for one of them the machine room did have a users manual that boiled down to a printout of all the man pages in the form of a 4 inch thick phonebook, on a stand bolted to the machine room desk console. Supposedly that manual cost $1000, or so they told us, probably to stop us from ripping out pages.

    What I miss about the pre-internet days was that when IBM or DEC shipped you a set of binders, that's honestly all you needed to know. You didn't have to Google and find 200 sources and figure out which are clickbait and which are real.

    The modern 90s CMOS mainframes from IBM (I forget the name) were the same way: I remember reading an entire ATM training manual of a couple hundred pages in an emulated 3270 terminal on roughly a 386 in about '95 or '96. I've never experienced anything quite like it since. I've seen the occasional "here's a PDF, best of luck" as docs, but IBM had amazing detailed manuals that all worked together.

    As for retro stuff, DEC's PDP-8 series of manuals was simply amazing; I've never seen anything like it since from a single source. Imagine all of 80s home computer tech times 100 or so, all single sourced. Great manual series.

    Some other retro DEC stuff that was decent was the TOPS-10 manual series. Supposedly TOPS-20 was a better OS with inferior docs. TOPS-10 was a cool OS.

    • (Score: 2) by DannyB on Monday June 05 2017, @05:53PM (10 children)

      by DannyB (5839) Subscriber Badge on Monday June 05 2017, @05:53PM (#520865) Journal

      Back when I was a snot-nosed 20 year old kid, I was able to brain download manuals. Much later, in about 1986 (as a Mac developer) I discovered Lisp. There was this XLisp I could download from CompuServe. Later I spent $495 (in 1989 dollars) on Macintosh Common Lisp. Used it until about 1993. Loved it. Still have every line of code I wrote. (Now I like Clojure.) I had bought my own copy of CLtL and CLtL2 (Common Lisp the Language, 1st and 2nd editions) [cmu.edu]; and brain downloaded those too. They were well thumbed.

      For the youngsters, CompuServe is something from a different millennium. Before web sites. Before dial up internet. Before Usenet was popular. Even before AOL. Back when modems were 2400 bits/second as God intended.

      --
      People today are educated enough to repeat what they are taught but not to question what they are taught.
      • (Score: 2) by VLM on Monday June 05 2017, @06:19PM

        by VLM (445) Subscriber Badge on Monday June 05 2017, @06:19PM (#520884)

        Back when modems were 2400 bits/second as God intended.

        Noobs. My dad and grandfather had TRS-80s with 300 baud modems. Eventually my dad upgraded to a TRS-80 Model 4P with an internal 1200 baud modem, which was pretty fast. Radio Shack used to include little "free 5 hours of CompuServe" carbon-paper accounts with all manner of products.

        Believe it or not I was BBS-ing in '87 or '88 with a 1200 baud modem I got from the "free" bin at a local ham radio fest (back in those days, radio fests had a lot of computers). I believe at Christmas of '89 I got a 2400 baud modem; doubling my speed was quite nice. That served until I blew like $500 in '92 or '93 on a 14.4K, and around then internet access was becoming a thing so I got a SLIP account with a dedicated static address, pretty cool.

      • (Score: 2) by kaszz on Tuesday June 06 2017, @12:06AM (8 children)

        by kaszz (4211) on Tuesday June 06 2017, @12:06AM (#521056) Journal

        Which of these Lisps are worthwhile, btw? As in, powerful enough to do whatever one might wish without overcomplicating matters?
          * Erlang
          * Common Lisp
          * Clojure
          * Haskell

        CompuServe was like a BBS that was powered by mainframes connected worldwide, right?
        Usenet... killed by the Eternal September and by lawyers peddling green cards. A lot of the big hosts handling it went away, and others could not pick it up because of the sheer volume and processing involved. But now capacity is cheap, so maybe a revival is possible?

        • (Score: 2) by hendrikboom on Tuesday June 06 2017, @02:08AM (6 children)

          by hendrikboom (1125) Subscriber Badge on Tuesday June 06 2017, @02:08AM (#521109) Homepage Journal

          There are two interesting Lisps; both of them are Scheme implementations.

          (1) Racket: Racket's specialty is multilingual programming. You can define new syntaxes and new semantics (i.e. new languages) and compose a program from modules written in those languages. The feature seems to cover everything from minor tweaks to completely different languages. For example, you can write modules in Algol 60 or in Scribble. Scribble is a notation for Scheme that looks and feels like a text formatting markup language.

          And Racket has excellent tutorials and documentation and a supportive mailing list, to return this discussion to the original subject.

          (2) Gambit. Gambit is a Scheme implementation that compiles directly to C (or C++), or you can just use the interpreter. Its virtue is that you can actually introduce new features and specify just what C code is to be generated from them. As a bonus you get a rather flexible scripting language. And you get to use low-level C-isms when you want.

          -- hendrik

          • (Score: 2) by kaszz on Tuesday June 06 2017, @03:20AM (5 children)

            by kaszz (4211) on Tuesday June 06 2017, @03:20AM (#521137) Journal

            Do any of these Lisps make efficient use of the multiple cores that are common now but were not when those languages came into being? They should be able to exploit that even better than many other languages, simply by their nature.

            • (Score: 2) by DannyB on Tuesday June 06 2017, @02:17PM (1 child)

              by DannyB (5839) Subscriber Badge on Tuesday June 06 2017, @02:17PM (#521328) Journal

              I would agree that Racket and Gambit are interesting, if you are interested in Scheme. There is also Chez Scheme, which was commercial but later became open source.
              https://en.wikipedia.org/wiki/Chez_Scheme [wikipedia.org]
              http://www.scheme.com/ [scheme.com]
              https://github.com/cisco/ChezScheme [github.com]

              Chez Scheme does incremental native-code compilation to produce native binaries for the PowerPC, SPARC, IA-32, and x86-64 processor architectures, and it supports R6RS.

              Chez Scheme has a windowing and graphics package called the Scheme Widget Library, and is supported by the portable SLIB library.

              I like Clojure (clojure.org). Clojure is a modern Lisp with reach: Clojure itself runs on the JVM and on .NET, while ClojureScript compiles to JavaScript and runs in the browser or on other JavaScript implementations.

              Momentary sidetrack on JVM (Java Virtual Machine) . . .
              JVM is the runtime engine that runs Java bytecode, which is emitted by the Java compiler, the Clojure compiler, and other language compilers (Kotlin, Scala, Groovy, Jython, JRuby, etc).
              JVM is interesting because it is an industrial strength runtime platform. A decade and a half of research has gone into the JVM's JIT compilers (C1 and C2) and its multiple GCs. When you run the JVM you can select "server" mode or "client" mode (i.e., tune it for running on a server or on a workstation). You have a choice of GC algorithms to pick from, and gobs of GC tuning parameters. If you simply give it plenty of RAM, the JVM is very fast. The GCs run in parallel on multiple CPU cores, so if you throw plenty of cores at it, you may rarely notice a GC pause.

              You can get multiple implementations of the JVM: Oracle's, or Azul's Zing -- which is a free binary build (with source) based on the open source OpenJDK. (Oracle's Java is also based on OpenJDK.) If you're on IBM mainframe hardware, IBM provides a Java runtime. A Java runtime is available on the Raspberry Pi. Languages that compile to the JVM can be used for Android development (which compiles JVM bytecode into Dalvik bytecode to run on Android).

              Java (or rather the JVM and its various languages) is popular in enterprise software. Java is even used for high frequency trading (yes, really).

              If you need a JVM that runs on hundreds of GB of RAM and up to 768 CPU cores, then look at Azul Zing on Linux.

              We now return to Clojure . . .

              Without being grandiose, Clojure runs on an ordinary desktop Java runtime, even on a humble Raspberry Pi.

              Clojure is functional programming.

              All data in Clojure is immutable. That means you can't alter anything in a data structure in place. If I have an array (a Clojure vector) of 50 million elements and I "change" element 1,253,507, I get back a new array with that element changed; the original array is unaltered. Yet the performance guarantee is "close to" what you expect of arrays for indexed access.

              How is this magic implemented behind the scenes? With 32-way trees. When you changed element 1,253,507, a certain leaf node in the tree was replaced. That new node, along with new nodes all the way up to a new root node, becomes the new array, and the new tree shares all of its other structure with the old array. Thus only a few tree nodes are recreated, at a cost of log32(n), so it's close to direct indexed performance, for the huge benefit of immutable data structures.

              This means there is no such thing as (SETF (CADR x) 'foo), which would be surgically modifying a data structure in place. There are similar update operations, but they reconstruct the affected portions of the data structure, relying on the underlying (invisible to you) structure sharing.
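
              Roughly what that looks like at the REPL -- a tiny illustrative sketch, small vector instead of 50 million elements, but the same structural sharing applies:

                ;; "changing" an element of a vector returns a new vector;
                ;; the original is untouched, and most structure is shared
                (def v  (vec (range 10)))        ; [0 1 2 3 4 5 6 7 8 9]
                (def v2 (assoc v 3 :changed))    ; new vector with index 3 replaced
                v2  ;=> [0 1 2 :changed 4 5 6 7 8 9]
                v   ;=> [0 1 2 3 4 5 6 7 8 9]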

              All variables are final. (Meaning, once a variable is assigned a value it cannot be changed to a different value.)

              This may sound restricting, but Clojure definitely has the right constructs to give it a functional style that mitigates what you might think of as restrictions. The end result is better quality code. You can see Haskell inspiration.

              Clojure has built-in types for lists, vectors (the "arrays" I mentioned above), sets and hash maps. Lists are what you already know. Vectors are what you expect of arrays: you can "alter" any element by direct index, far cheaper than in a list, but with the immutability guarantee described above (you get back a new vector with the single element altered). Sets are what you expect; an item can only occur once in a set. Hash maps are key-value stores like in many modern languages, but backed by an implementation that uses different backing for different sizes. A small map of a handful of entries will not use hashing at all, but will internally just be an array that is searched for the key; at some invisible magic threshold the underlying implementation becomes a real hash map. Clojure has lots of hidden implementation optimizations like that for short lists, vectors, etc., and specialized implementations of functions for certain common argument patterns.
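
              For reference, a minimal sketch of the literal syntax for those collections (the values are made up):

                '(1 2 3)                    ; list
                [1 2 3]                     ; vector (cheap indexed access)
                #{1 2 3}                    ; set -- an element can occur only once
                {:host "example" :port 80}  ; hash map
                (conj #{1 2 3} 2)           ; still a three-element set; adding a duplicate is a no-op
                (assoc {:a 1} :b 2)         ;=> {:a 1, :b 2}  (a new map comes back)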

              Clojure has excellent Java interoperability (when used on the JVM). Think of it like other Lisps having an FFI to C, except that Clojure and Java (or other JVM language) code is fully interoperable. Clojure data structures can be passed to Java and manipulated, or even created, in Java code by importing the Clojure runtime classes and using the Clojure data structure APIs. Java objects can be passed to Clojure and accessed with dot notation to call methods and reach class members. You can write JVM classes in Clojure for compatibility with other Java code. For example, if you are using a Java library that needs to be passed a callback, it is easy to write that callback in Clojure without writing any Java.
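
              A small sketch of that interop -- nothing project-specific here, just standard JDK classes:

                ;; calling Java from Clojure with dot notation
                (.toUpperCase "hello")            ;=> "HELLO"
                (def sb (StringBuilder. "doc"))   ; constructor call
                (.append sb "s")                  ; calling a method on the Java object
                (str sb)                          ;=> "docs"

                ;; a Clojure fn works directly where Java wants a Runnable callback
                (.start (Thread. (fn [] (println "called back from a Java thread"))))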

              Because it runs on JVM, Clojure has access to the EMBARRASSINGLY large amount of riches of Java libraries. Code to do anything under the sun. Including access to the GPU. And there are Clojure libraries for working with the GPU.

              Clojure has an excellent story about concurrency. I won't go into it all here, but you can write high level code that will run on all your CPU cores. In Clojure, map is like MAPCAR in CL. Instead of "map", you can use "pmap" to process the elements in parallel; results still come back in order, and the usual caveat is just that the work done per element should be heavy enough to outweigh the coordination overhead. (E.g., I have a list of ten million items and need to run an expensive function F on all of them: (pmap F items). Watch all your CPU cores light up.)
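
              A minimal sketch of the map/pmap swap (slow-step is a made-up stand-in for an expensive F):

                (defn slow-step [x]        ; pretend this is an expensive computation
                  (Thread/sleep 100)
                  (inc x))

                (time (doall (map  slow-step (range 32))))   ; ~3.2 s, one core
                (time (doall (pmap slow-step (range 32))))   ; roughly that divided by your core count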

              Clojure has a nice literal syntax for lists, vectors, sets and hash maps. Thus you can render arbitrarily complex data structures to text, and then read them back into internal data again. Think JSON, but much better.
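
              A sketch of that round trip, using pr-str and clojure.edn (the data here is invented):

                (require '[clojure.edn :as edn])

                (def data {:hosts ["a" "b"] :ports #{8080 8443} :retry {:max 5}})

                (def text (pr-str data))           ; render the nested structure to a string
                (edn/read-string text)             ; read it back as the same data
                (= data (edn/read-string text))    ;=> true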

              You'd be amazed at what people do in Clojure because of Java interop: playing with MIDI and synthesizers, even software synthesizers (see Overtone, a Clojure front end for live audio synthesis), or using Clojure with OpenCV (computer vision).

              Let me throw out another interesting one.

              Pixie Lisp

              It compiles to native code and is built with the RPython toolchain, so it has a good C FFI. Early implementations ran on the Raspberry Pi with direct Lisp access to WiringPi, which provides access to the hardware GPIO pins (digital / analog / SPI / I2C / PWM input and output pins).

              Pixie Lisp is also Clojure-inspired.

              Finally, let me mention Shen, a sufficiently advanced Lisp (as in, indistinguishable from magic: it is like having Haskell and Prolog baked into the Lisp itself).

              Hope that helps.

              --
              People today are educated enough to repeat what they are taught but not to question what they are taught.
              • (Score: 2) by DannyB on Tuesday June 06 2017, @02:19PM

                by DannyB (5839) Subscriber Badge on Tuesday June 06 2017, @02:19PM (#521329) Journal

                Duh. "Or Azul's Zing -- which is a free binary build" I meant Azul Zulu.

                Zing is the one that runs on hundreds of GB of RAM with up to 768 CPU cores.

                --
                People today are educated enough to repeat what they are taught but not to question what they are taught.
            • (Score: 2) by VLM on Tuesday June 06 2017, @03:59PM (1 child)

              by VLM (445) Subscriber Badge on Tuesday June 06 2017, @03:59PM (#521385)

              There are no obvious factual errors in Danny's reply to the best of my knowledge at this time.

              A very short answer to your very specific question in a limited sense is "yes"

              A longer answer to that specific question including plenty of low level implementation examples is at:

              http://www.braveclojure.com/concurrency/ [braveclojure.com]

              Clojure for the Brave and True is a bit of an acquired taste but I figured it would be more fun than a link into clojuredocs.org. clojuredocs is best used if you already know you want a future and you merely forgot some detail about futures.

              It's not a magic or magic-ish language like Erlang; you kinda have to intentionally parallelize stuff in Clojure for it to run multi-core, but it's not hard either and there's plenty of support.

              It works.
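
              A minimal sketch of that intentional parallelism with futures -- the same mechanism the Brave and True chapter above walks through; the sleeps just stand in for real work:

                ;; each future starts running on another thread immediately;
                ;; deref (@) blocks until its value is ready
                (def a (future (Thread/sleep 1000) :first))
                (def b (future (Thread/sleep 1000) :second))

                [@a @b]            ; both ran concurrently, so this takes ~1 s, not ~2 s
                (shutdown-agents)  ; lets the JVM exit promptly if run as a script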

              • (Score: 2) by DannyB on Tuesday June 06 2017, @04:39PM

                by DannyB (5839) Subscriber Badge on Tuesday June 06 2017, @04:39PM (#521409) Journal

                This [objectcomputing.com] is an intro that I found useful as an intro to Clojure, a long time ago.

                I think I remember an amusing YouTube video where Rich Hickey mentions running some cool core.async program on Azul's custom hardware with hundreds of cpu cores.

                I am excited to try Pixie Lisp on a Pi. So many projects to tinker with. So little time. Ugh.

                --
                People today are educated enough to repeat what they are taught but not to question what they are taught.
            • (Score: 2) by hendrikboom on Saturday June 10 2017, @01:22AM

              by hendrikboom (1125) Subscriber Badge on Saturday June 10 2017, @01:22AM (#523345) Homepage Journal

              I believe Gambit has a multicore implementation in beta testing -- including multicore garbage collection, but not garbage collection running in parallel with program execution.

              Racket may well be working on something similar, but I don't remember the details.

              -- hendrik

        • (Score: 0) by Anonymous Coward on Tuesday June 06 2017, @10:04AM

          by Anonymous Coward on Tuesday June 06 2017, @10:04AM (#521242)

          Might be minus the -.

          What killed Usenet wasn't the volume per se, it was the BINARIES being pushed to newsgroups -- even non-binary newsgroups once ISPs started filtering the binary ones. There was a chart up a few years back showing Usenet daily volume from the 80s to the early-mid '00s. The content went from a few gigabytes a day, to tens of gigabytes a day, to a few hundred gigabytes a day, and by the '00s it was into terabytes a day of traffic. Yes, terabytes. In an era when drives were still sub-terabyte, and backbone capacities were probably below... OC-48(?), with OC-3 or so being reasonably high end for non-corporate local ISPs.

          THAT, combined with spam, is what killed Usenet. It could be reestablished today, but there are fundamental changes to the protocol that should be made: help synchronize mirroring of chains of posts, allow modifications rather than reposts for changed posts, and do better peering of new blocks of messages. (NNTP was point to point, which is responsible for a lot of the traffic overhead. It was okay in the UUCP era, but nowadays something BitTorrent-esque for helping spread new posts across multiple fast hosts could help immensely.)