
posted by cmn32480 on Wednesday January 13 2016, @11:23AM
from the cord-cutters-ftw dept.

The average American watches more than five hours of TV per day, but pretty soon that leisure time may be dominated by YouTube and other online video services.

In an address at CES 2016, YouTube's chief business officer Robert Kyncl argued that digital video will be the single biggest way that Americans spend their free time by 2020 – more than watching TV, listening to music, playing video games, or reading.

The amount of time people spend watching TV each day has been pretty steady for a few years now, Mr. Kyncl pointed out, while time spent watching online videos has grown by more than 50 percent each year. Data from media research firm Nielsen shows that it's not just young people watching online videos, either: adults aged 35 to 49 spent 80 percent more time on video sites in 2014 than in 2013, and adults aged 50 to 64 spent 60 percent more time on video sites over the same time period.

Why the shift?



 
  • (Score: 2) by bzipitidoo on Wednesday January 13 2016, @02:09PM

    by bzipitidoo (4388) Subscriber Badge on Wednesday January 13 2016, @02:09PM (#289056) Journal

    Textbook programming also simplifies by ignoring "irrelevancies". Typical algorithm textbook pseudocode has no worries about types, overflow, array sizes and bounds, heap space, or I/O. The Turing machine's tape is infinite. And I have found that this can lead to errors. For instance, Quicksort is always stated to take O(n log n) time (on average; the worst case is O(n^2)), but that makes a big assumption: that a single comparison can be done in O(1). Yet string comparison is well known to take O(n) time. How can Quicksort be done in O(n log n) time on strings?
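
    To make the hidden cost concrete, here's a minimal C sketch (the words are made up for illustration): qsort's O(n log n) bound counts calls to the comparator, but each strcmp call walks the strings, so a single comparison can itself cost time proportional to the string length rather than O(1).

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Comparator for an array of char*: each call costs O(length of the
         * common prefix), not the O(1) the textbook analysis assumes. */
        static int cmp_str(const void *a, const void *b)
        {
            return strcmp(*(char * const *)a, *(char * const *)b);
        }

        int main(void)
        {
            char *words[] = { "pear", "peach", "pea", "apple" };
            size_t n = sizeof words / sizeof words[0];

            qsort(words, n, sizeof words[0], cmp_str); /* O(n log n) comparator calls */

            for (size_t i = 0; i < n; i++)
                puts(words[i]);
            return 0;
        }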

    Starting Score:    1  point
    Karma-Bonus Modifier   +1  

    Total Score:   2  
  • (Score: 2) by curunir_wolf on Wednesday January 13 2016, @02:18PM

    by curunir_wolf (4772) on Wednesday January 13 2016, @02:18PM (#289059)
    A "string" is NOT a data type.
    --
    I am a crackpot
    • (Score: 1) by Shimitar on Wednesday January 13 2016, @02:21PM

      by Shimitar (4208) on Wednesday January 13 2016, @02:21PM (#289060) Homepage

      Ssshh... it's a Javascript "scientist" :)

      --
      Coding is an art. No, java is not coding. Yes, i am biased, i know, sorry if this bothers you.
    • (Score: 2) by tibman on Wednesday January 13 2016, @03:07PM

      by tibman (134) Subscriber Badge on Wednesday January 13 2016, @03:07PM (#289078)

      For those who want to know why: a string is an array of char (a character). char is a datatype because it has a fixed size in memory, just like int, float, and the other primitives.
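
      A tiny C illustration of the difference (just a sketch):

          char c = 'A';            /* one char: a fixed-size primitive          */
          int  i = 42;             /* another fixed-size primitive              */
          char word[6] = "hello";  /* a "string": an array of char plus a '\0'  */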

      --
      SN won't survive on lurkers alone. Write comments.
      • (Score: 2) by Pino P on Wednesday January 13 2016, @06:12PM

        by Pino P (4721) on Wednesday January 13 2016, @06:12PM (#289163) Journal

        char (a character)

        The data type char does not fully represent a character. In C, it represents a UTF-8 code unit; in Java, it represents a UTF-16 code unit. However, a character won't fit in either of those.

        How many characters is é (Latin small letter E with acute)? What about y̾ (Latin small letter Y with combining vertical tilde)? Or 加 (CJK ideogram meaning "add") ? Or 💩 (pile of poo)? Or 💩̾ (pile of poo with combining steam)?

        Each of these five is one "grapheme cluster", though all five are more than one UTF-8 code unit, three are more than one UTF-16 code unit, and two are more than one code point (so UTF-32 won't help). See the UTF-8 Everywhere manifesto [utf8everywhere.org] and why Swift's string API is so messed up [mikeash.com] to learn how "character" isn't a data type either.
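
        A quick C sketch of the point (assuming the source file is saved as UTF-8 and that the é is the precomposed form): strlen counts bytes, i.e. UTF-8 code units, not characters, so every one of these single grapheme clusters comes out longer than 1.

            #include <stdio.h>
            #include <string.h>

            int main(void)
            {
                printf("%zu\n", strlen("é"));   /* 2 bytes, 1 code point  */
                printf("%zu\n", strlen("y̾"));  /* 3 bytes, 2 code points */
                printf("%zu\n", strlen("加"));  /* 3 bytes, 1 code point  */
                printf("%zu\n", strlen("💩"));  /* 4 bytes, 1 code point  */
                printf("%zu\n", strlen("💩̾")); /* 6 bytes, 2 code points */
                return 0;
            }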

        • (Score: 2) by tibman on Wednesday January 13 2016, @07:06PM

          by tibman (134) Subscriber Badge on Wednesday January 13 2016, @07:06PM (#289206)

          A char is typically one byte. Historically, char is short for character and represents an ASCII character. Only higher-level stuff cares about what a collection of chars represents. UTF-8 is a multi-byte encoding, so by definition you cannot pre-allocate space for a UTF-8 character unless you already know what it is. You also cannot know how many bytes a UTF-8 character is unless you decode it. One UTF-8 character may be one byte or it may be four. So there will never be a primitive datatype for UTF-8. When you are talking about UTF-8, you might as well be talking about strings or some other dynamic structure. I think only UTF-32 could be made into a primitive, and a UTF-32 char (4 bytes, fixed) would indeed fully represent any character.
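
          Roughly what "you cannot know how many bytes a UTF-8 character is unless you decode it" looks like in C (a hypothetical helper, error handling omitted): the length of a sequence comes from inspecting its lead byte.

              /* Number of bytes in a UTF-8 sequence, read from its lead byte. */
              static int utf8_seq_len(unsigned char lead)
              {
                  if (lead < 0x80)           return 1; /* 0xxxxxxx: ASCII      */
                  if ((lead & 0xE0) == 0xC0) return 2; /* 110xxxxx             */
                  if ((lead & 0xF0) == 0xE0) return 3; /* 1110xxxx             */
                  if ((lead & 0xF8) == 0xF0) return 4; /* 11110xxx             */
                  return -1;                           /* continuation/invalid */
              }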

          --
          SN won't survive on lurkers alone. Write comments.
          • (Score: 2) by Pino P on Wednesday January 13 2016, @07:33PM

            by Pino P (4721) on Wednesday January 13 2016, @07:33PM (#289229) Journal

            Each of these five is one "grapheme cluster", though ... two are more than one code point (so UTF-32 won't help).

            a UTF-32 char (4 bytes, fixed) would indeed fully represent any character

            A UTF-32 code unit does indeed represent any code point. But because not all characters of a script are available precomposed [wikipedia.org], a single grapheme cluster may span more than one code point if it has combining diacritics attached to it. Nor is it very useful to divide a string between the code points that make up a grapheme cluster. That's what I meant to get across by including the examples of y̾ (y with vertical tilde) and 💩̾ (poo with steam): there is no fixed-width data type that can represent all characters.
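
            A small C11 sketch of that point (assuming a compiler that provides <uchar.h> and char32_t): even in UTF-32, the y with a combining vertical tilde spans two code units.

                #include <stdio.h>
                #include <uchar.h>

                int main(void)
                {
                    /* U+0079 'y' followed by U+033E combining vertical tilde:
                     * one grapheme cluster, two UTF-32 code units. */
                    char32_t y_tilde[] = U"y\u033E";
                    printf("%zu code units\n",
                           sizeof y_tilde / sizeof y_tilde[0] - 1); /* prints 2 */
                    return 0;
                }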

            • (Score: 3, Insightful) by tibman on Wednesday January 13 2016, @09:58PM

              by tibman (134) Subscriber Badge on Wednesday January 13 2016, @09:58PM (#289287)

              You are talking about combining characters, yes? You are taking two characters from UTF-32 and combining them: http://www.fileformat.info/info/charset/UTF-32/list.htm [fileformat.info]
              Just because two characters occupy the same space on the screen that doesn't make them one character.

              This argument is getting silly. A char is a datatype in low-level languages and a string is not. You are arguing with my explanation using historical words outside of their intended context. Historically char was short for character. It is also a good way for a layman to understand what char is. I was only trying to clarify someone else's not so clear remark. You are not helping me with that endeavor and it's pedantry bordering on trolling or something. If the char datatype isn't designed to hold a character then what is it used for?

              --
              SN won't survive on lurkers alone. Write comments.
              • (Score: 2) by Pino P on Thursday January 14 2016, @04:06PM

                by Pino P (4721) on Thursday January 14 2016, @04:06PM (#289533) Journal

                If the char datatype isn't designed to hold a character then what is it used for?

                Let me try to sum up your argument and mine in a manner that addresses the point at hand: The data types called char were originally designed to hold a character, back when the users of computing were members of cultures whose languages used few characters. As the number of cultures served by computing has grown, the data types called char have since become insufficient for that purpose.

  • (Score: 2) by VLM on Wednesday January 13 2016, @02:31PM

    by VLM (445) on Wednesday January 13 2016, @02:31PM (#289063)

    Yet string comparison is well known to take O(n) time.

    It's O(constant); it's just traditional to call it 1. On any set of finite strings the comparison will never take more than a constant, that constant being based on the max string length, or maybe a finite-sized data type or finite-sized machine. The number of strings will have no impact on run time.

    The biggest screw-up with scalability is not understanding the problem. Best-case insertion sort is O(n) given one new entry to add to a pre-sorted set, while quicksort is way worse at O(n log n), and I got into a huge workplace argument years ago with a guy who apparently thought the input has no effect on scalability. See, I agreed with him that if you feed a QS a random pile of data it's O(n log n) and insertion sort is O(n^2), so QS is way faster for random data, but we weren't sorting random data, we were sorting already-sorted data... I may be forgetting some details.
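
    Roughly the case I mean, as a C sketch (a toy example, not the actual code from back then): on input that is already sorted, the shifting loop never runs, so the whole sort is a single O(n) pass.

        /* Insertion sort: O(n) on already-sorted input, O(n^2) in the worst case. */
        static void insertion_sort(int *a, size_t n)
        {
            for (size_t i = 1; i < n; i++) {
                int key = a[i];
                size_t j = i;
                while (j > 0 && a[j - 1] > key) { /* never true on sorted input */
                    a[j] = a[j - 1];
                    j--;
                }
                a[j] = key;
            }
        }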

    • (Score: 2) by VLM on Wednesday January 13 2016, @02:34PM

      by VLM (445) on Wednesday January 13 2016, @02:34PM (#289064)

      The number of strings will have no impact on run time.

      The runtime of any individual comparison, to be specific. Whether X > Y is true takes the same time to evaluate regardless of whether there are a million or a trillion other comparisons to check later. Obviously, in practical use, today's value of "n" will have an impact on a sort's total wall-clock time.

  • (Score: 0) by Anonymous Coward on Wednesday January 13 2016, @02:59PM

    by Anonymous Coward on Wednesday January 13 2016, @02:59PM (#289072)

    Textbook programming also simplifies by ignoring "irrelevancies". Typical algorithm textbook pseudocode has no worries about types, overflow, array sizes and bounds, heap space, or I/O. The Turing machine's tape is infinite. And I have found that this can lead to errors. For instance, Quicksort is always stated to take O(n log n) time (on average; the worst case is O(n^2)), but that makes a big assumption: that a single comparison can be done in O(1). Yet string comparison is well known to take O(n) time. How can Quicksort be done in O(n log n) time on strings?

    Wait, the time to compare two strings in the collection depends on the number of strings in the collection?

    If you have a collection of n_strings strings whose average length is n_characters per string, then the sort performs O(n_strings log n_strings) comparisons, and each comparison is O(n_characters). Note the different variables in the big-O notation. You can change the average length of the strings in your collection independently of the size of the collection. You can have a collection of 20 strings, each a million characters long, or a collection of a million strings, each 20 characters long. While the total number of characters is the same, the sorting times for the two will differ dramatically.
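
    Rough back-of-the-envelope numbers, assuming every comparison reads the whole string:

      - a million strings of 20 characters each: about 10^6 * log2(10^6) ≈ 2*10^7 comparisons, each up to 20 characters, ≈ 4*10^8 character reads
      - 20 strings of a million characters each: about 20 * log2(20) ≈ 86 comparisons, each up to 10^6 characters, ≈ 8.6*10^7 character reads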

    • (Score: 2) by bzipitidoo on Wednesday January 13 2016, @03:30PM

      by bzipitidoo (4388) Subscriber Badge on Wednesday January 13 2016, @03:30PM (#289087) Journal

      Yes, you spotted the main trick to that trick question: different n's :).

      And yet, the two quantities are not completely unrelated. As the number of strings grows, the match length also grows. Suppose you have an alphabet of 26 letters and 27 strings to sort: at least 2 of the strings must start with the same letter. With 26^2 + 1 strings to sort, that grows to 2 matching letters at the start for at least 2 of the strings.

  • (Score: 2) by vux984 on Wednesday January 13 2016, @06:24PM

    by vux984 (5045) on Wednesday January 13 2016, @06:24PM (#289174)

    You are abusing order notation and stating the problem in a confusing manner, although there is some technical merit to your argument.

    Yet string comparison is well known to take O(n) time.

    Order notation is to measure asymptotic behavior as a problem size grows.

    String comparison is O(n) asymptotically; that is, as the number of elements in the strings grows arbitrarily large, the time to compare two strings grows linearly. That's what O(n) means. If the strings are constrained to a finite length, then a comparison can be considered O(1).

    Likewise, Qsort is asymptotically O(n log n); that is, as the number of elements to be sorted grows arbitrarily large, the time it takes to qsort them grows at the rate of n log n.

    So for all practical purposes Qsort takes O(n log n) even on strings.

    However, yes, if you were actually interested in representing the time to quicksort strings where BOTH the size of the strings AND the number of strings to be sorted were allowed to grow unbounded, then yeah, it's O(m * n log n), where n is the number of elements and m is the size of the elements.