
posted by Fnord666 on Thursday February 07 2019, @10:11AM   Printer-friendly
from the next-up-is-LEG-graphic-cards dept.

Adobe Considers Manufacturing Custom Processors

Adobe certainly isn't the first company to consider making, or actually make, its own chips. Axios noted that Apple, Google, Samsung, and Amazon already do just that. (And speculation runs rampant in the Apple community about whether and when the company will decide to ditch Intel for good.)

Those companies don't make their own chips for the fun of it. They do it because it gives them more control over their products, rather than forcing them to make their software for standard hardware. The idea is that this leads to better performance while also reducing dependence on outside companies. Imagine that right now every product is like a flavor dust applied to a Lay's chip. Eventually, someone was going to make their own spuds from scratch.

From that perspective, Adobe making its own chips would make sense. Its software is an ecosystem unto itself—there are people out there whose livelihoods are directly affected by their proficiency with and performance in Adobe's creative tools. (Sorry, sorry, the Adobe Creative Cloud subscription service. Branding!) Improving performance with custom silicon would help those people and, of course, give Adobe yet another way to make itself all-but-indispensable to creators.

[...] Axios quoted Adobe CTO Abhay Parasnis as saying: "Do we need to become an ARM licensee? I don't have the answer, but it is something we are going to have to pay attention to."

Also at Axios.


Original Submission

 
  • (Score: 2, Disagree) by bzipitidoo on Thursday February 07 2019, @03:18PM (5 children)

    by bzipitidoo (4388) on Thursday February 07 2019, @03:18PM (#797785) Journal

    > adobe products are ram and cpu pigs

    And storage pigs too. PDF is an extremely wasteful format that can somehow burn up an entire megabyte to store 10 pages of text. Granted, some of that comes from the decision to work around font issues by allowing font data to be bundled in the file. Even without embedded fonts, though, it's still terribly wasteful. Did you think word processor file formats were bad? Thought HTML was bad? Well, PDF is worse.

    Because PDF can embed images, one of the lazy crazy things I frequently see is the PDF of scanned text. Some printed text was scanned, but not OCRed, and the PDF format was used merely to bundle the scanned images into one file. It gets worse. A PDF of images doesn't have selectable text. So I've seen a few research papers that have both the scans and the OCRed text. They ran OCR on the text and didn't bother with any cleanup of the OCR job, which is understandable considering how labor- and time-intensive a job that currently is. Then they threw the two together, so that the text is selectable and the scan is still present, letting readers correct OCR problems themselves as desired.

    At least PDF is a little better than PS. They used shorter names for the functions.

  • (Score: 0) by Anonymous Coward on Thursday February 07 2019, @04:38PM (2 children)

    by Anonymous Coward on Thursday February 07 2019, @04:38PM (#797829)

    PDF is not to blame for a 1 MB file representing 10 pages of text. That's on the person (or, I suspect, scanner) that generated the file. OCR exists for a reason.

    When generating a PDF from scratch via a good tool - like pdflatex - the output is far more compact than that of almost any other layout engine I am aware of. Huge bonus points for being fully cross-platform with reliable formatting on all platforms. Yes, this occasionally means embedding fonts.

  • (Score: 3, Insightful) by lentilla on Thursday February 07 2019, @04:45PM (1 child)

    by lentilla (1770) on Thursday February 07 2019, @04:45PM (#797836)

    one of the lazy crazy things I frequently see is the PDF of scanned text

    Storing images of scanned text in a PDF is not quite as bonkers as one might initially imagine. Three reasons:

    1. Everyone [believes they] understand what a PDF is. They double-click on the document and it opens - nice and easy.
    2. PDFs can contain multiple pages. Yes, there are raster formats that do the same, but your average person doesn't know about them, and even if they did, "image software" doesn't always implement that feature. Again: PDFs are understood to contain multiple pages - an "image" is understood to be a "photo".
    3. CCITT compression. Otherwise known as "Group 4 fax", this does a brilliant job of compressing bi-colour ("black-and-white") images in a lossless fashion. Scan a page of text, it compresses efficiently, and you obtain a very reasonable facsimile when you print it out at the other end.
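    The claim in point 3 is easy to sanity-check. The sketch below uses Python's standard-library `zlib` as a stand-in for CCITT Group 4 (Flate is a different algorithm than the fax-specific 2D run-length coding, but both exploit the same property: a bilevel scan is mostly long runs of identical pixels). The page size and "text line" pattern are invented for illustration.

    ```python
    import zlib

    # Simulate a 1-bit scanned page: roughly A4 at 300 dpi, 2480 x 3508 pixels,
    # packed 8 pixels per byte (0x00 = all-white byte, 0xFF = all-black byte).
    WIDTH_BYTES = 2480 // 8
    HEIGHT = 3508

    rows = []
    for y in range(HEIGHT):
        if y % 40 < 4:
            # Every 40th band of rows carries a crude black "text line"
            # with white margins on either side.
            row = b"\x00" * 20 + b"\xff" * (WIDTH_BYTES - 40) + b"\x00" * 20
        else:
            row = b"\x00" * WIDTH_BYTES  # blank (white) row
        rows.append(row)
    raw = b"".join(rows)

    compressed = zlib.compress(raw, 9)
    print("raw bytes:", len(raw))
    print("compressed bytes:", len(compressed))
    print("ratio: %d:1" % (len(raw) // len(compressed)))
    ```

    A megabyte of raw bitmap shrinks by orders of magnitude, which is why a fax-style scan inside a PDF is far cheaper to store than its pixel count suggests.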

    So, if I want to send someone an electronic copy of my water bill that I received in the mail, I'm going to turn it into a PDF. No, I'm not going to attempt to OCR the text - not if all someone wants is what amounts to a photocopy from days gone by.

    • (Score: 2) by bzipitidoo on Thursday February 07 2019, @07:32PM

      by bzipitidoo (4388) on Thursday February 07 2019, @07:32PM (#797904) Journal

      Nevertheless, PDF is inefficient, even on plain text. Specifically, the language is very redundant. A typical PDF text document divides text along word boundaries. For each word, the distance between the letters of that word is set with a command. Then the space between that word and the next is given. The next word likely has a slightly different spacing between the letters, requiring another use of the letter spacing command. Not great, but not too horrible either.

      However, some state info is required with the commands. The worst may be the opacity. PDF allows for some transparency, but so often all that is wanted is 100% opacity. Because generators rarely set the opacity state once and leave it alone, it gets specified over and over and over and over, for every single word. Even worse is if the document is in color. Then the red, green, and blue fill colour must be re-set every single time the letter spacing is changed. Ugly.
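      The per-word pattern described above can be sketched as follows. This is a hand-written imitation of a PDF text content stream, not output from any real generator; the operator names are genuine PDF operators (`Tc` sets character spacing, `rg` sets the nonstroking RGB colour, `gs` applies a graphics state, which is where opacity lives, and `Tj` shows a string), but the exact per-word pattern varies by producer.

      ```python
      # A hedged imitation of the redundancy bzipitidoo describes: each word
      # re-sets spacing, colour, and graphics state before one Tj operator.
      words = ["Nevertheless,", "PDF", "is", "inefficient"]

      ops = ["BT", "/F1 11 Tf"]  # begin text, select font (done once)
      for i, w in enumerate(words):
          ops.append("%.2f Tc" % (0.1 + i * 0.01))  # letter spacing, per word
          ops.append("0 0 0 rg")                    # fill colour, repeated
          ops.append("/GS0 gs")                     # graphics state, repeated
          ops.append("(%s) Tj" % w)                 # finally, show the word
          ops.append("5 0 Td")                      # advance to the next word
      ops.append("ET")

      stream = "\n".join(ops)
      print(stream)
      # State operators outnumber the text-showing operators three to one here.
      print(stream.count(" Tj"), "Tj vs",
            stream.count(" Tc") + stream.count(" rg") + stream.count(" gs"),
            "state operators")
      ```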

      > CCITT compression. ... Scan a page of text, it compresses efficiently

      Compression is great, but it shouldn't be used as a crutch to clean up after a dreadfully and needlessly verbose standard. As a general rule, you are going to get smaller file sizes by leaving out as much redundancy as you can before you turn to data compression.

      Another problem for which compression can't always compensate well is bad encoding. A PDF can be black and white, or color. If it has only black and white but was generated as a color document, it could be as much as 50% larger even though compressed. Further bloat comes from throwing in additional unnecessary PDF commands between each word. A good PDF optimizer can fix those sorts of issues. But most people aren't going to bother with that. Better if PDF creation software didn't add useless bloat in the first place. Even better if PDF didn't lend itself to such bloat.
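      What such an optimizer does at the content-stream level can be sketched with nothing but the standard library. The toy pass below drops colour and graphics-state operators that merely repeat the state already in force; real tools (qpdf, Ghostscript's pdfwrite) do far more, such as object streams, font subsetting, and image re-encoding. The stream itself is invented for illustration.

      ```python
      import zlib

      # An invented, redundant content-stream fragment: the same colour and
      # graphics state are re-issued before every word.
      lines = []
      for i in range(200):
          lines += ["0 0 0 rg", "/GS0 gs", "(word%d) Tj" % i, "5 0 Td"]
      redundant = "\n".join(lines)

      def strip_repeated_state(stream):
          """Drop rg/gs operators that repeat the state already in force."""
          state = {}
          out = []
          for line in stream.split("\n"):
              op = line.rsplit(" ", 1)[-1]
              if op in ("rg", "gs"):
                  if state.get(op) == line:
                      continue  # redundant: the state would not change
                  state[op] = line
              out.append(line)
          return "\n".join(out)

      lean = strip_repeated_state(redundant)
      for label, s in (("redundant", redundant), ("optimized", lean)):
          data = s.encode()
          print(label, len(data), "bytes ->", len(zlib.compress(data, 9)),
                "compressed")
      ```

      The point matches the comment above: the redundant stream compresses well, but the stream with the redundancy removed first compresses to something smaller still.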