posted by janrinok on Thursday June 25 2015, @06:07PM   Printer-friendly
from the one-step-at-a-time dept.

Nvidia's latest mark of its newfound open-source friendliness is that it has begun providing open-source hardware reference headers for its latest GK20A/GM20B Tegra GPUs, while also working to provide hardware header files for its older GPUs. These programming header files will in turn help the development of the open-source Nouveau video driver, which up to this point has had to do much of its development via reverse engineering.

With an eye toward making Nouveau NVIDIA's primary development environment for Tegra, they are looking at adding "official" hardware reference headers to it. Ken explained, "The headers are derived from the information we use internally. I have arranged the definitions such that the similarities and differences between GPUs is made explicit. I am happy to explain the rationale for any design choices and since I wrote the generator I am able to tweak them in almost any way the community prefers."

So far he has been cleared to provide the programming headers for the GK20A and GM20B. For those concerned this is just an item for driving up future Tegra sales, Ken added, "over the long-term I'm confident any information we need to fill-in functionality >= NV50/G80 will be made public eventually. We just need to go through the internal steps necessary to make that happen."
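
For a sense of what such reference headers typically contain, here is a purely hypothetical sketch in C: register offsets and bit definitions, with the per-chip differences made explicit in the way Ken describes. Every name, offset, and value below is invented for illustration and does not come from NVIDIA's actual GK20A/GM20B headers.

/* Hypothetical sketch of a hardware reference header. All register names,
 * offsets, and values here are invented for illustration only. */
#ifndef __HW_EXAMPLE_GPU_H__
#define __HW_EXAMPLE_GPU_H__

/* Register offsets within an imaginary engine's MMIO window. */
#define NV_PFIFO_STATUS                 0x00000100
#define NV_PFIFO_STATUS_IDLE            (1 << 0)   /* engine has no pending work */
#define NV_PFIFO_RUNLIST_BASE           0x00000104 /* runlist base address */

/* Per-chip differences spelled out with chip suffixes. */
#define NV_PGRAPH_CTXSW_TIMEOUT_GK20A   0x00001FF0
#define NV_PGRAPH_CTXSW_TIMEOUT_GM20B   0x00001FF8

#endif /* __HW_EXAMPLE_GPU_H__ */

The value of files like this is that they replace names and offsets painstakingly guessed through reverse engineering with authoritative ones.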

Perhaps most interesting is that, moving forward, they would like to use the Nouveau kernel driver code base as the primary development environment for new hardware. That is quite a turnaround from 2012, when Torvalds sent a public "fuck you!" to Nvidia. Also, don't forget Intel's and AMD's open-source offerings.


Original Submission

 
  • (Score: 4, Insightful) by Nerdfest on Thursday June 25 2015, @06:33PM

    by Nerdfest (80) on Thursday June 25 2015, @06:33PM (#201144)

    For those concerned this is just an item for driving up future Tegra sales

    Who cares? As long as they actually do it, I hope it does drive up their sales. It will certainly affect my video card selection to a degree.

    • (Score: 2) by kaszz on Thursday June 25 2015, @07:04PM

      by kaszz (4211) on Thursday June 25 2015, @07:04PM (#201161) Journal

      No luck with those older cards, then...

      • (Score: 2) by maxwell demon on Thursday June 25 2015, @07:09PM

        by maxwell demon (1608) on Thursday June 25 2015, @07:09PM (#201165) Journal

        That's not a given. If availability of information for old cards affects sales of new cards (because it goes into the decision of those buying it), you can expect the information for older cards, too.

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by kaszz on Thursday June 25 2015, @10:29PM

          by kaszz (4211) on Thursday June 25 2015, @10:29PM (#201261) Journal

          You mean that by withholding information on older cards they can make people throw them away and buy new ones? :p

          • (Score: 0) by Anonymous Coward on Friday June 26 2015, @04:21PM

            by Anonymous Coward on Friday June 26 2015, @04:21PM (#201570)

            If it makes them throw away old NVidia cards and buy new AMD cards, it doesn't help NVidia in the slightest.

  • (Score: 2) by Freeman on Thursday June 25 2015, @07:04PM

    by Freeman (732) on Thursday June 25 2015, @07:04PM (#201162) Journal

    It would be awesome if Nvidia / AMD / Intel made their video drivers open source. I'm sure they could do it in a way that protected their IP. I think the main issue is that Linux isn't big enough for them to care. Most System Administrators don't care about whatever POC video card is in their server, so long as it works. Though, even that's debatable.

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 0) by Anonymous Coward on Thursday June 25 2015, @07:40PM

      by Anonymous Coward on Thursday June 25 2015, @07:40PM (#201193)

      Most System Administrators don't care about whatever POC video card is in their server

      That is true for the 'service' sort of installations. But when you get to something like a render farm? Oh, they care. They care a great deal. They are also dropping multiple millions of dollars... In the sorts of installations you are talking about, whichever one uses the least amount of power wins.

      It may not be too obvious yet, but my guess is that Steam is changing the game. The Steam guys have said multiple times how Intel's open source has directly helped them and how they contributed back.

    • (Score: 2) by kaszz on Thursday June 25 2015, @10:27PM

      by kaszz (4211) on Thursday June 25 2015, @10:27PM (#201259) Journal

      What do you mean by POC?

      • (Score: 2) by jimshatt on Thursday June 25 2015, @10:50PM

        by jimshatt (978) on Thursday June 25 2015, @10:50PM (#201271) Journal
        Piece of crêpe.
      • (Score: 2) by Freeman on Thursday June 25 2015, @10:52PM

        by Freeman (732) on Thursday June 25 2015, @10:52PM (#201274) Journal

        Piece of Crap. Can be applied to something that works or doesn't. It could also mean Pile of, but that would be reserved for something more complex, like Windows.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
        • (Score: 2) by kaszz on Thursday June 25 2015, @11:27PM

          by kaszz (4211) on Thursday June 25 2015, @11:27PM (#201293) Journal

          Pile Of Nuclear Waste? :p

          When using a server as, say, a remote box or part of a computing cluster, the video card just better be able to handle VGA without a fuss and then enter sleep mode so it doesn't pester the system buses or power supply. If not, I suspect that machine or card would hit the RMA department like thunder, and get the card manufacturer banned.

    • (Score: 3, Informative) by TheRaven on Friday June 26 2015, @09:47AM

      by TheRaven (270) on Friday June 26 2015, @09:47AM (#201452) Journal

      AMD and Intel do (AMD currently maintains open source and proprietary drivers, but is slowly merging the lines to reduce their costs). nVidia might follow in the next few years. They're already using a lot of open source pieces. A modern GPU driver is basically a little bit of kernel code to do memory management (allocate DMA buffers and command queues to userspace) and then a massive compiler. AMD is benefitting a lot from having their GPU back end in the LLVM tree - it means that they can always use the latest LLVM release in their drivers and take advantage of new target-agnostic optimisations. nVidia uses a hacked-up LLVM branched from an old version for mid-level optimisations and then their own code generation layer. Intel keeps their back end out of tree, but does fairly regular merges.
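
      As a rough illustration of the "little bit of kernel code to do memory management" half of that split, here is a sketch in C against Linux's generic DRM dumb-buffer ioctls. The API is real and vendor-neutral, but this is a simplified sketch: error handling is minimal, header install paths vary by distro, and you may need suitable permissions on /dev/dri/card0.

      /* Ask the kernel's DRM driver for a buffer, then map it into our
       * address space. Everything clever (shader compilation, scheduling)
       * lives in userspace on top of buffers like this. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/ioctl.h>
      #include <sys/mman.h>
      #include <unistd.h>
      #include <drm/drm.h>
      #include <drm/drm_mode.h>

      int main(void)
      {
          int fd = open("/dev/dri/card0", O_RDWR);
          if (fd < 0) { perror("open"); return 1; }

          struct drm_mode_create_dumb creq;
          memset(&creq, 0, sizeof(creq));
          creq.width = 1920; creq.height = 1080; creq.bpp = 32;
          if (ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq) < 0) { perror("create"); return 1; }

          struct drm_mode_map_dumb mreq;
          memset(&mreq, 0, sizeof(mreq));
          mreq.handle = creq.handle;
          if (ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &mreq) < 0) { perror("map"); return 1; }

          void *fb = mmap(NULL, creq.size, PROT_READ | PROT_WRITE, MAP_SHARED,
                          fd, (off_t)mreq.offset);
          if (fb == MAP_FAILED) { perror("mmap"); return 1; }
          printf("got a %llu-byte buffer from the kernel driver\n",
                 (unsigned long long)creq.size);
          munmap(fb, creq.size);
          close(fd);
          return 0;
      }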

      Note that this is about mobile GPUs, where source availability often is an issue, to ODMs if not to end users. If you're building an embedded device then you want to make sure that you can provide driver updates when kernel / userspace APIs and ABIs change.

      --
      sudo mod me up
  • (Score: 5, Interesting) by MichaelDavidCrawford on Thursday June 25 2015, @07:40PM

    by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Thursday June 25 2015, @07:40PM (#201192) Homepage Journal

    The interview was to work on CUDA. Computational physics was the focus of my degree. I taught it at Caltech, then did data analysis at CERN. So I was very eager to discuss numerical solutions to differential equations and the like.

    Instead they asked me one single logic puzzle that had me totally stymied. That question had nothing whatsoever to do with physics or device drivers.

    I got a job doing RAID drivers because the interviewers at AMCC asked me questions that were relevant to the job.

    --
    Yes I Have No Bananas. [gofundme.com]
    • (Score: 2) by Alfred on Thursday June 25 2015, @07:48PM

      by Alfred (4006) on Thursday June 25 2015, @07:48PM (#201194) Journal
      I like reading puzzles too hard for me to solve. What was the problem? Did they show it was actually solvable?
      • (Score: 2) by The Archon V2.0 on Thursday June 25 2015, @07:58PM

        by The Archon V2.0 (3887) on Thursday June 25 2015, @07:58PM (#201200)

        > Did they show it was actually solvable?

        Since it sounds like an interview rigged up by an HR type and not a coding type, I'd bet that all they could possibly show is the answer they got off the Internet.

      • (Score: -1, Troll) by Anonymous Coward on Thursday June 25 2015, @08:16PM

        by Anonymous Coward on Thursday June 25 2015, @08:16PM (#201209)

        > I like reading puzzles too hard for me to solve. What was the problem? Did they show it was actually solvable?

        Haven't you figured it out yet? MDC is a pathological liar. His life is full of adventure and variety -- he taught at Caltech, worked at CERN, dated Gabe Newell's sister, and a thousand other things that I would remember if I cared enough to catalog his crazy -- all the while having time to constantly post about all his amazing experiences on web forums. When does the credulity run out?

        • (Score: 3, Funny) by Gaaark on Thursday June 25 2015, @08:52PM

          by Gaaark (41) on Thursday June 25 2015, @08:52PM (#201223) Journal

          ...but he was great in "Some mothers do 'ave 'em"... :)

          --
          --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
          • (Score: 2) by MichaelDavidCrawford on Friday June 26 2015, @12:10AM

            "Does your husband/mother know that you're writing to me?". i mean likenine year old girls would offer their maidenhead to me.

            "Will you sing at my daughters edding"

            "I dont know how to sing but i play improvisational piano."

            "Thats ok - come sing anyway."

            When Michael Patrick Dumble-Smythe retired he moved into my aunt's neighborhood so british people would not know where to find him.

            He is the specific reason i use my full name online. really i just prefer mike.

            --
            Yes I Have No Bananas. [gofundme.com]
      • (Score: 2) by MichaelDavidCrawford on Friday June 26 2015, @12:16AM

        I regard it as bad form to tell anyone what my job interview questions are, even when they are totally asinine.

        While it wasn't chess, consider a chess problem: what move is required so your opponent cannot checkmate you?

        --
        Yes I Have No Bananas. [gofundme.com]
        • (Score: 0) by Anonymous Coward on Friday June 26 2015, @03:37AM

          by Anonymous Coward on Friday June 26 2015, @03:37AM (#201377)

          Toppling your king; the best way to win is not to play.

        • (Score: 2) by stormreaver on Friday June 26 2015, @03:37PM

          by stormreaver (5101) on Friday June 26 2015, @03:37PM (#201537)

          While it wasn't chess, consider a chess problem: what move is required so your opponent cannot checkmate you?

          That's easy. The only way to win is not to play.

    • (Score: 0) by Anonymous Coward on Thursday June 25 2015, @09:43PM

      by Anonymous Coward on Thursday June 25 2015, @09:43PM (#201237)

      It's been a long time since I did any Physics, and in my day, us puny undergrads didn't get to do any computing as such, so what books should I read etc. to learn about this sort of stuff? My brain is pretty lame these days, so something that starts with the basics would be best.

      • (Score: 2) by MichaelDavidCrawford on Friday June 26 2015, @03:10AM

        I didn't use it, but it is quite popular. I don't recall the titles of the books I used, but they were all into FORTRAN.

        Every law of physics other than general relativity is in CERNLIB somewhere, but despite wandering its Gordian labyrinth for decades I have yet to actually find any.

        Google Runge-Kutta; that should get you started. It is a way of avoiding the compounding of errors by overestimating as much as you underestimate.
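
        For the curious, here is a minimal sketch of the classic fourth-order Runge-Kutta (RK4) method in C; the example ODE, step size, and interval are arbitrary. The symmetric weighting of the four slope estimates is what keeps per-step errors from compounding the way they do with Euler's method.

        /* RK4 for dy/dt = f(t, y); example: dy/dt = -y, exact solution e^-t. */
        #include <stdio.h>

        static double f(double t, double y) { return -y; }

        int main(void)
        {
            double t = 0.0, y = 1.0, h = 0.1;
            int i;
            for (i = 0; i < 10; i++) {
                double k1 = f(t, y);
                double k2 = f(t + h / 2, y + h * k1 / 2);
                double k3 = f(t + h / 2, y + h * k2 / 2);
                double k4 = f(t + h, y + h * k3);
                y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6;  /* weighted average */
                t += h;
            }
            printf("y(1.0) ~= %f (exact: 0.367879)\n", y);
            return 0;
        }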

        Ask Terry Schalk, tas at scipp dot ucsc dot edu, to recommend some books. Tell him I sent you; he's a good guy.

        --
        Yes I Have No Bananas. [gofundme.com]
    • (Score: 2) by kaszz on Thursday June 25 2015, @11:05PM

      by kaszz (4211) on Thursday June 25 2015, @11:05PM (#201281) Journal

      Perhaps you should have started a competing business instead, considering the level of recruitment... ;-)
      Search for "Open Graphics Project".

    • (Score: 2) by zugedneb on Thursday June 25 2015, @11:19PM

      by zugedneb (4556) on Thursday June 25 2015, @11:19PM (#201287)

      What for?
      What would they need a computational physicist for?
      One would think they need compiler writers and the sort...

      --
      old saying: "a troll is a window into the soul of humanity" + also: https://en.wikipedia.org/wiki/Operation_Ajax
    • (Score: 0) by Anonymous Coward on Friday June 26 2015, @02:57PM

      by Anonymous Coward on Friday June 26 2015, @02:57PM (#201521)

      I know this is off topic. You put up an 'ask' question a few weeks ago that did not get through the queue.

      I wanted to answer your question about becoming a lawyer. Don't. My sister did this (she went back to teaching; it has a better health plan and paid better per hour). She found it was 100% dull ("all I do is fill out forms for other people and charge them for it"). You have people coming at you all the time trying to 'sue' other people over stupid things. She tells me stories of digging through 40 years of paperwork.

      You are also always chasing people down to pay you. A co-worker of mine got a divorce a few years ago. Her lawyer told her, 'You are the first person to pay me on time.' Then let's say you 'win a big case': sometimes whoever you win money from does not pay up, or your client will just decide not to pay you.

      You will also be going into a field where your fellow lawyers *enjoy* messing with you (mentally, physically, financially, and professionally).

      You may think "I will make lots of money" not really and not for many many years.

      You may think "I will help people". Probably not. Usually at most you can mitigate what they have done. One dude I know just decided not to show for court. Try doing anything gov related when you have a warrant for your arrest. I call them 'that paint is wet' people. They do not listen and then wonder how to get the paint off. There are *lots* of them out there.

      If you want a good idea of what it is like, try being a clerk at some mom-and-pop office for a year or so. You will get a real good idea of what goes on. You may love it and then want to go for it. But for this sort of profession I say 'try before you buy'. Get the right dude and they may even help you pay for it if you like it. But go slow on this. It may seem prestigious, but in the end it's just a job.

      • (Score: 2) by MichaelDavidCrawford on Saturday June 27 2015, @12:55PM

        by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Saturday June 27 2015, @12:55PM (#202054) Homepage Journal

        Oddly, my story is marked Accepted but has not been published yet. I figure they are waiting for a lull in the submissions.

        My main interest is in working for the rights of the homeless and mentally ill. Often the laws are well established but are not enforced.

        There was an attorney in Santa Cruz who specialized in the homeless. When he was unable to pay his bar association dues, they were paid anonymously by a judge.

        As a consultant I know all about deadbeat clients.

        --
        Yes I Have No Bananas. [gofundme.com]
  • (Score: 0) by Anonymous Coward on Thursday June 25 2015, @08:04PM

    by Anonymous Coward on Thursday June 25 2015, @08:04PM (#201207)

    I can give a good reason why they don't want to be so forthcoming with the intricate details of how their hardware works. Remember the day when a Sound Blaster branded card was the sound card of choice if you didn't want to be laughed at? Shortly after Creative Labs' peak popularity, clones started appearing. "Sound Blaster Compatible" became a marketing buzzword and real Sound Blasters took a sales hit. Eventually they became what they are now: only one of many vendors for PC sound. They lost their market dominance.

    Right now, Nvidia has a nice little business arrangement going. Other card manufacturers pay THEM to use their chipsets in products. Releasing ANY of the information the OSS community needs could very easily be seen, by businessmen in a committee, as an unacceptable release of vital company secrets that could be used to reverse-engineer their technologies. They obviously don't want that. This recent change is definitely a step in the right direction.

    • (Score: 3, Insightful) by maxwell demon on Thursday June 25 2015, @08:49PM

      by maxwell demon (1608) on Thursday June 25 2015, @08:49PM (#201222) Journal

      The times when programs talked to the hardware directly are long gone. "Sound blaster compatible" just meant that the card worked with programs that knew how to speak with Sound Blaster cards. The stuff that makes a good sound card isn't seen by the programs: It's a good D/A and A/D converter and good analogue hardware. If the other manufacturers really learned anything from Sound Blaster cards, they did so by looking at the card's circuits (or, more probably, by doing measurements on them), not by looking at the interface.
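
      For concreteness, the "interface" in question was mostly a handful of I/O ports and a command protocol. Below is a sketch of the classic DSP reset handshake that every "Sound Blaster compatible" clone had to reproduce. This is DOS-era code using Borland-style port I/O helpers (outportb/inportb/delay from <dos.h>), with 0x220 as the conventional default base address; it will not build on a modern OS.

      #include <dos.h>

      #define SB_BASE        0x220
      #define SB_DSP_RESET   (SB_BASE + 0x6)
      #define SB_DSP_READ    (SB_BASE + 0xA)
      #define SB_DSP_STATUS  (SB_BASE + 0xE)   /* bit 7 set => byte waiting */

      /* Returns 1 if a DSP (real or clone) answers reset with the magic 0xAA. */
      int sb_dsp_reset(void)
      {
          int i;
          outportb(SB_DSP_RESET, 1);
          delay(1);                    /* hold reset briefly */
          outportb(SB_DSP_RESET, 0);
          for (i = 0; i < 1000; i++) {
              if (inportb(SB_DSP_STATUS) & 0x80)
                  return inportb(SB_DSP_READ) == 0xAA;
          }
          return 0;                    /* nothing answered at this base */
      }

      Any card that answered this handshake (and the commands that followed) looked like a Sound Blaster to software, regardless of the analogue circuitry behind it.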

      --
      The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by tibman on Thursday June 25 2015, @09:36PM

      by tibman (134) Subscriber Badge on Thursday June 25 2015, @09:36PM (#201234)

      If NVIDIA didn't keep moving the goalposts, then they would be like Creative Labs. Plenty of their older cards have been reverse engineered and surely copied. Sound cards are very different in that good enough is good enough. We don't need a sound card that requires active cooling fans that bring the room temperature up two degrees (F) when running. Creative Labs could have gone into audiophile territory before anyone else. Sound cards with vacuum tubes should have been a thing, lol. Not sure what else they could have done?

      --
      SN won't survive on lurkers alone. Write comments.
    • (Score: 1, Insightful) by Anonymous Coward on Thursday June 25 2015, @09:36PM

      by Anonymous Coward on Thursday June 25 2015, @09:36PM (#201236)

      I can give a good reason why they don't want to be so forthcoming with the intricate details of how their hardware works. Remember the day when a Sound Blaster branded card was the sound card of choice if you didn't want to be laughed at? Shortly after Creative Labs' peak popularity, clones started appearing.

      As another anon states, the clones just provided the same interface to the hardware; by tracing the circuitry it was easy to see how the SB worked. However, there is another reason that GPU makers are holding their proprietary driver blobs dear: look at what happened to audio cards. There were all sorts of features, more mixing channels, bigger instrument banks, etc. Now none of that matters, as it's all mixed CPU-side; we just need an interface to select which speaker gets which batch of samples at what rate, and we're done. The same is happening to GPUs.

      At first there was the fixed-function pipeline. GPU code could optimize around the interface to the fixed-function pipeline. Now the programmable pipeline has made all such functional offerings obsolete. E.g., offerings based on how many dynamic lights the "card" supported are now completely redundant, as pixel shader code can support as many or as few lights, with as many or as few attributes, as it wants per pass -- the number of lights is not "hardware" dependent anymore, and it largely never was (there was some algorithmic hardware acceleration for some aspects of the fixed-function pipe at first, but it quickly migrated to firmware [supplied by drivers]).

      With GPU compute shaders and shared-memory architectures, the line between GPU and CPU is dissolving. Soon we'll have back the freedom of pure software rasterization, which could freely interact with CPU memory, not needing separate geometry memory for networking, physics, etc., and with no bottleneck between the physics / networking / rendering RAM. Just as the once-separate FPU or math co-processor is now integrated into the CPU, the parallelization of GPUs will merge with CPUs, and the only distinguishing feature will then be the driver code that OSes provide to developers.
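
      To make the "number of lights is just a loop bound" point concrete, here is a minimal sketch in plain C rather than a real shading language; the names and the diffuse-only lighting model are illustrative:

      /* Per-fragment diffuse shading where the light count is a runtime
       * parameter, not a hardware limit. Illustrative sketch only. */
      typedef struct { float x, y, z; } vec3;

      static float dot3(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

      float shade_fragment(vec3 normal, const vec3 *light_dir,
                           const float *intensity, int nlights)
      {
          float lum = 0.0f;
          int i;
          for (i = 0; i < nlights; i++) {        /* one light = one loop trip */
              float ndotl = dot3(normal, light_dir[i]);
              if (ndotl > 0.0f)
                  lum += ndotl * intensity[i];
          }
          return lum;
      }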

      It's a losing battle to keep the drivers proprietary. Eventually they must be made openly available if we're ever to do online banking, email, etc. on hardware that utilizes the "GPU" as a component of ordinary processing, like the FPU is used now. GPU vendors know the future is coming and are vying to have their proprietary vectorization solutions positioned to become a de facto standard and thus have more control over their competitors.

      • (Score: 2) by kaszz on Thursday June 25 2015, @11:11PM

        by kaszz (4211) on Thursday June 25 2015, @11:11PM (#201284) Journal

        Will the GPU be generic and efficient enough to really make the CPU obsolete, in order to merge with it in a meaningful way?
        GPUs are, as far as I know, very good for specific algorithms on large amounts of data. But generic processing is another thing.

        • (Score: 2) by TheRaven on Friday June 26 2015, @10:00AM

          by TheRaven (270) on Friday June 26 2015, @10:00AM (#201455) Journal
          The nVidia Project Denver architecture looks like it could be the start of this kind of convergence. It's based on the Transmeta designs after nVidia bought them and has an internal (private, undocumented) VLIW instruction set and an ARM decoder that performs trivial translation from ARM to VLIW. There's also a JIT that will trace ARM operations and compile hot code paths to a much more efficient encoding. The structure of the VLIW pipeline borrows a lot from nVidia GPU designs, and it would be quite easy to imagine a variant with a much wider set of FPU pipes for SIMT code and explicit instructions for turning them on and off, which would allow the control and register renaming logic, as well as a subset of the pipelines, to be shared between the CPU and GPU.
          --
          sudo mod me up
          • (Score: 2) by kaszz on Friday June 26 2015, @10:32AM

            by kaszz (4211) on Friday June 26 2015, @10:32AM (#201462) Journal

            If they try to keep instruction sets secret when delving into the generic CPU area, they will have a problem...

            Would be nice if sellers started with a sticker [Nvidia free]. ;)

            • (Score: 0) by Anonymous Coward on Friday June 26 2015, @04:32PM

              by Anonymous Coward on Friday June 26 2015, @04:32PM (#201574)

              So Intel does publish the microcode instruction set of their CPUs? I don't think so.

            • (Score: 2) by TheRaven on Monday June 29 2015, @08:40AM

              by TheRaven (270) on Monday June 29 2015, @08:40AM (#202694) Journal
              As the other poster points out, x86 vendors do this: their CPUs translate x86 instructions into something totally undocumented (and do optimisations at this level, fusing micro-ops from different instructions into single micro-ops and even doing some quite clever things like recognising memcpy idioms). There's no real difference for nVidia doing this. It's quite nice to keep the real ISA secret, because it means that you can change it periodically. IBM has done this with their mainframes quite explicitly since the '60s, with a public ISA that's completely decoupled from the implementation.
              --
              sudo mod me up
      • (Score: 3, Interesting) by gman003 on Thursday June 25 2015, @11:45PM

        by gman003 (4155) on Thursday June 25 2015, @11:45PM (#201303)

        You have quite clearly never done any serious GPU programming.

        "Smart" sound cards, that did heavy audio processing, fell by the wayside for two reasons: low-end, dumb, integrated audio was good enough for enough people that dedicated sound cards became rare, too rare for most applications to bother supporting, and ore importantly, audio processing was easy and efficient to do on the CPU. Audio processing is basically multiply-accumulate buffers - attenuate this source by this much, then add it to the sound buffer. The only thing I can think of that wouldn't be done with FMA is pitch shifting, which I would guess is a FFT, still easy enough to do on a CPU. And it's a small, one-dimensional buffer - a second of stereo audio is only 172KiB, so you can have quite a number of buffers before the data set becomes too large.

        Compare graphics. First, the scale of the problem goes up exponentially because it's 2D, not 1D, buffers, and there's a lot more of them (instead of left/right audio, you have red, green, blue, depth, and alpha, and you may be compositing several full-screen buffers for one frame). A single 1080p buffer is about 8MiB, and you'll be using many of those.
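
        A quick back-of-the-envelope check of those buffer sizes:

        /* The arithmetic behind the sizes quoted above. */
        #include <stdio.h>

        int main(void)
        {
            long audio = 44100L * 2 * 2;    /* 1s of 16-bit stereo audio     */
            long frame = 1920L * 1080 * 4;  /* one 1080p buffer, 4 B/pixel   */
            printf("audio: %ld bytes (~%.0f KiB)\n", audio, audio / 1024.0);
            printf("frame: %ld bytes (~%.1f MiB)\n", frame, frame / 1048576.0);
            return 0;
        }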

        Next, rasterization is an implicitly parallel task. Efficient cores - superscalar, out-of-order, speculatively executing big CPU cores - aren't particularly faster than a single GPU "core", which is scalar, in-order, and literally dumber than a Pentium Pro core. But a decent GPU has hundreds of ALUs, and a top-end one has thousands, while the widest CPU I know of is just 18 cores. GPUs are an embarrassingly parallel solution to an embarrassingly parallel problem. There are some old algorithms that could exploit CPU efficiencies better, but even modern software renderers don't use them. It's easier to just throw cores at the problem.

        And all those cores need to be fed. GPU caches can be small, because for the most part you process one tile and then move on, but the memory bandwidth needed is staggering. An entry-level gaming GPU will push about 80GiB/s in memory I/O, which beats even a top-end server CPU. The top end GPUs can push about 600GiB/s, and once HBM matures, we're looking at terabytes per second of data being pushed.

        We're never going to move graphics back to the CPU. Best-case, we'll have an onboard GPU that's fully programmable, with a published ISA and the few remaining fixed-function blocks moved into code, attached to a CPU with a far, far wider memory bus than today's. But if we hit a wall on processor scaling before we hit a wall on LCD resolutions, we'll still have discrete GPUs just to manage the heat dissipation.