
Journal of cafebabe (894)

The Fine Print: The following are owned by whoever posted them. We are not responsible for them in any way.
Monday June 26, 2017
04:09 PM
Career & Education

I have difficulty collaborating with other people. It may be that I'm a cantankerous curmudgeon but I suspect this is (also) a wider problem.

I write software. I dabble with analog and digital electronics. I do computer system administration and database administration. I prefer to write in C (for speed) and Perl (because the interpreter is everywhere). I greatly prefer to avoid proprietary environments. Although I've been described as being to the left of Richard Stallman, my interest in open licences comes primarily from not being able to use my own work. (Others take a much more hard-line approach and refuse outright to work on anything which won't be under the GNU General Public Licence.)

I'm willing to work almost anywhere in the world. I'm willing to work with micro-controllers, super-computers, Lisp, PHP, bash, Linux, BSD, ATM, SPI, I2C, CANBus and numerous other concepts. I prefer to work with content versioning (any is better than none) and repeatable processes. However, I'm willing to work in an environment which scores 4 out of 12 on the Joel Test.

I have difficulty getting enthused about:-

  • "Apps" where Apple or Google take 30%. This isn't a good deal for me or many clients.
  • node.js or other JavaScript - especially in conjunction with robot control.
  • C++, C#, Objective C, D or any other derivative of C.
  • Anything over radio - especially IEEE802.15.4
  • Block-chain systems in their current form.
  • Facial recognition.
  • Neural networks.
  • "User experience."
  • "Cloud."

I'm more enthused about FPGA and GPU programming but I like to have an abundance of hot-spares and, for me, that significantly raises the barrier to entry. This hardware also makes me edgy about being able to use my own work in the long term.

Within these parameters, I'd like to describe some failed explorations with other techies. Without exception, I first encountered these techies at my local makerspace.

One techie is a huge fan of Mark Tilden, inventor of the RoboSapien and numerous other commercialized robots. Mark Tilden has an impeccable pedigree and has been featured in Wired Magazine and suchlike for 20 years or more. Mark Tilden espouses a low-power analog approximation of neurons with the intention of the neurons forming a combinatorial explosion of resonant states. When implemented correctly, motor back-EMF influences the neurons and therefore a robot blindly adjusts its (resonant) walking gait in response to terrain.

Anyhow, my friend was impressed that I'd used an operational amplifier as an analog integrator connected to a VCO [Voltage Controlled Oscillator] for the purpose of controlling servo motors without a micro-controller. My friend hoped that we'd be able to make a combo robot which, as a corollary, would perform a superset of 3D printing functionality without using stepper motors or servo motors. This is fairly ambitious but it would be worthwhile to investigate feasibility.
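
For reference, the textbook relations behind that arrangement are sketched below. R and C stand for whatever integrator components the circuit happens to use, and the VCO tuning is taken as linear to first order; this is a general illustration rather than the specific design:-

    V_{out}(t) = V_{out}(0) - \frac{1}{RC}\int_0^t V_{in}(\tau)\,d\tau

    f_{VCO}(t) = f_0 + k_v\,V_{out}(t)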

To complicate matters, I'm in a situation where modest financial success in this (or another) venture would gain my full-time attention. Whereas, he's currently receiving workman's compensation due to an incident at his electro-mechanical engineering job. So, I have every reason to commercialize our efforts and he has exactly the opposite motive.

There's also the scope of robot projects. They tend to be fairly open-ended and this in itself is cause for project failure. Therefore, I tried to nudge my friend into a project with a well-defined scope, such as high-fidelity audio reproduction. It is from attempts to make this situation work that I found that PAM8403 audio amplifiers were ideally suited to control motors. However, this discovery and further discoveries around it have not enthused my friend. (Note that the specification, as it exists, was to make an analog robot and avoid the use of stepper motors or servo motors. Admittedly, this fails to incorporate back-EMF but it is otherwise satisfactory.)

He's had personal problems which include his cheating Goth girlfriend dumping him and giving away his cat. He's also got his employer asking him to come back to work. Understandably, our collaboration has stalled.

An ex-colleague wants to collaborate in different projects. So far, we've had one success and one failure. The success contributed to a mutual ex-colleague's first Oscar nomination. The failure (his PHP, my MySQL) completely failed to mesh but led to a customized aggregator responsible for many articles on SoylentNews.

He's currently proposing an ambiguous deal which appears to be a skill-swap where we retain full rights on projects that we initiate. However, many of his projects involve tinkering with ADCs and/or LCDs. This would have been highly lucrative in the 1980s. Nowadays, it is somewhere between economizing and procrastinating. He previously worked on a 100 Volt monophonic valve amplifier but that was about three years ago and he never bothered to commercialize it.

Prior to this, I worked with friends on LEDs for an expanding market. We worked through the tipping point where the NFL and Formula 1 switched to LEDs. Two years ago, we were four years behind the market leader. Since then, we have completely failed to sell any LEDs to anyone. The majority have been using the venture for resumé padding and I've been out-voted on features even though I'm the only person who worked in the industry and worked with customers on a daily basis. After being verbally offered less than half of an equal share in my own venture, I haven't seen one of the active co-founders for 10 months. It would be reasonable to say that people are avoiding me. I find this bizarre.

Regardless, my task was to make a secure controller. The primary resumé padder wanted something which looked like an Apple TV. (This is for an industrial controller which may be exposed to 100% humidity and is required to meet ISO 13850.) I tried various options while being resource starved. I found that the software specification has yet to be achieved on a modern system. I found that the hardware specification is very difficult without an obscene budget. I also found that development on ARM has been difficult until relatively recently. The most accessible ARM system, a Raspberry Pi, didn't gain hardware floating point support under Linux until May 2016. Parties with successful projects (mostly phones) like these high barriers to entry. (How did Cobalt Networks launch successful RISC products in 1998? By using MIPS rather than ARM - and then switching to x86.)

In recent discussion about Raspberry Pi usage, you'll see how I've been influenced over the last two years or so about long-term support, hardware and packaging, micro-controller code density and other matters. I've also added options for documentation and training. However, without aligned collaborators or sensible resources, I'm not progressing. I suspect others are in diverse but analogous situations. Do you want to collaborate?

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Monday June 26 2017, @07:53PM

    by Anonymous Coward on Monday June 26 2017, @07:53PM (#531494)

    I have difficulty collaborating with other people. It may be that I'm a cantankerous curmudgeon but I suspect this is (also) a wider problem.

    You know I'm bad at communication, it's the hardest thing for me to do
    And it's said, it's the most important part that relationships will go through
    And I'd give it all away
    Just so I could say
    That I know, I know, I know, I know that you're gonna be OK anyway

    You know there's no rhyme or reason for the way you turned out to be
    I didn't go and try to change my mind not intentionally
    I know it's hard to hear me say
    It but I can't bear to stay
    And I just know, I know, I know, I know that you're gonna be OK anyway

    Always keep your heart locked tight,
    Don't let your mind retire, oh
    But I just couldn't take it,
    I tried hard not to fake it
    But I fumbled it when I came down to the wire

    It felt right, it felt right, oh
    But I fumbled him when I came down to the wire
    It felt right, it felt right, oh
    But I fumbled him when I came down to the wire

  • (Score: 2) by kaszz on Tuesday June 27 2017, @06:47AM (2 children)

    by kaszz (4211) on Tuesday June 27 2017, @06:47AM (#531799) Journal

    Perhaps you need to evaluate and select a project with a reasonable chance of success, and then focus on that to drive it to fruition. If some people won't work for you, then change your company. Not every combination of people works out; accept it and move on to new relationships.

    To make sure a project won't swell, reduce the feature specification to a minimum but keep the parts that enable future versions to be better (640k is enough....). Hire out as much as you can, if possible, or buy.

    To conclude:
      * Evaluate
      * Focus
      * Enabler friends, not ballast.

    • (Score: 2) by cafebabe on Saturday July 01 2017, @11:03PM (1 child)

      by cafebabe (894) on Saturday July 01 2017, @11:03PM (#534019) Journal

      I'm wondering if the three month business incubators are worthwhile or if their brevity just allows them to spread risk much further over a given period. I cannot find the reference but I've seen an argument that successful projects should be shorter than one cycle of Moore's law. Start with a one million dollar idea. If it takes two cycles to execute then it may only be worth 1/4 million dollars when finished. However, a lower value idea executed over a much shorter period retains most of its value against the "inflation" of technological progress. On this basis, I wonder if I should sequentially engage in far less significant projects and hope that serendipity provides further upside. By accident or design, this seems to be the approach taken by venture capitalists.
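
      Written out, that decay argument is simply exponential against Moore's-law cycles. With an initial value V_0 and n cycles spent on execution, the worked example above becomes:

          V_n = \frac{V_0}{2^n}, \qquad V_2 = \frac{\$1\,000\,000}{2^2} = \$250\,000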

      Regarding minimal product, I would have dearly liked to ship a stand-alone 1kW LED implementation which can only be switched via its mains power supply. The others wanted to supply four boxes - power regulation plus one box for each tier of the full scale-out system.

      Regarding partners, I find that their level of commitment is far lower than I'd hope. Invariably, this is because they have wavering health, family commitments and/or a lucrative income. I see why venture capitalists take a team fresh out of education. It isn't direct ageism but they tend to be fairly healthy, without time-consuming family roles, and not attached to a large salary.

      --
      1702845791×2
      • (Score: 2) by kaszz on Sunday July 02 2017, @08:26AM

        by kaszz (4211) on Sunday July 02 2017, @08:26AM (#534118) Journal

        The success of a project depends on the specifics of that project. Generalizations rarely stick. If you understand the project's core idea, then you can also understand what will help and what will not. As for incubators, have a look at the business contract with them.

        Considering the networking trend, I would make sure that any product could be extended with such capability, and that the product can scale. As for LEDs, that may entail high-frequency PWM and, at minimum, letting the input to that be available on some external jack. Or have room for another circuit board. Making it possible to insert a relay into the power loop may also be worth considering. And if the product is to scale then modularisation may pay by not having to implement everything in every box.

        It seems you have to redo the partner thing. Make sure they are able to commit and will commit. Maybe you could have better success talking with students?

  • (Score: 0) by Anonymous Coward on Tuesday June 27 2017, @08:43AM

    by Anonymous Coward on Tuesday June 27 2017, @08:43AM (#531813)

    Commercialization is the act of casting pearls before swine.

    Better to be homeless and sing on the street, than commercialize anything ever.

    Be like MichaelDavidCrawford, for his life of virtuous homelessness is a shining example that material things are unnecessary for a fulfilling life.

    Wait, what? MDC got a job? That fucking sell-out!! What a piece of shit!!!

    MDC is a pig like all the other swine!!!!!

    In conclusion, collaborate with MichaelDavidCrawford. And fuck the both of you scum.

  • (Score: 2) by turgid on Friday June 30 2017, @07:58PM (2 children)

    by turgid (4318) Subscriber Badge on Friday June 30 2017, @07:58PM (#533653) Journal

    I have difficulty collaborating with other people. It may be that I'm a cantankerous curmudgeon but I suspect this is (also) a wider problem.

    In the olden days they might have called you "fussy." I'd just say that you're probably very careful about choosing to avoid things that are a waste of your time and energy.

    Even when there's a substantial amount of money involved, it can be hard work just to fill your brain with less than interesting things, things you may have done before and things that you can see could be done much better another way.

    Or maybe I'm projecting...

    • (Score: 2) by cafebabe on Saturday July 01 2017, @11:36PM (1 child)

      by cafebabe (894) on Saturday July 01 2017, @11:36PM (#534026) Journal

      You're not projecting. I wonder if I'm too fussy or not fussy enough. I quite obviously avoid trendy technology but I have deep reservations about whether the leading edge of development provides good value outside of the computer industry. Since the 1960s, large companies have purchased large computers with mixed results. By the 1980s, small companies purchased small computers with mixed results. In the 1990s, web development could obtain obscene amounts of money. In the 2010s, apps typically cost more and provide less. Specifically, some of the HTML from the 1990s still works. Whereas, apps developed for an iPhone 3 don't work on an iPhone 7 and aren't available to legacy users.

      --
      1702845791×2
      • (Score: 2) by kaszz on Sunday July 02 2017, @08:28AM

        by kaszz (4211) on Sunday July 02 2017, @08:28AM (#534119) Journal

        In the 2010s, apps typically cost more and provide less. Specifically, some of the HTML from the 1990s still works. Whereas, apps developed for an iPhone 3 don't work on an iPhone 7 and aren't available to legacy users.

        This will probably shape the future...

  • (Score: 1) by xyz on Saturday July 01 2017, @07:41AM (6 children)

    by xyz (6633) on Saturday July 01 2017, @07:41AM (#533841)

    You seem to have some similar interests / skills and interesting tech posts.

    "However, without aligned collaborators or sensible resources, I'm not progressing. I suspect others are in diverse but analogous situations. Do you want to collaborate?"

    I understand; I have been in analogous situations. It can be hard to get a successful collaboration going and to survive its different phases, due to having to come to agreement on a sensible project and the mode of its pursuit, time pressures, resource pressures, and people's changing interests / priorities.

    I'm at the point where I know enough or can easily figure out / learn enough to do "almost anything" technically using some tools or others that I can deal with, but I'm feeling somewhat inhibited by not having more good resources and useful options for collaboration. Lots of things are interesting intellectually and seem worth doing but if it takes person-months / years to make useful progress working without enough collaboration / resources then that's discouraging wrt. effectiveness and productivity.

    And even for the "relatively easy quick things" it could be helpful to have collaboration since to do something "right" (or at least get usefully sharable / exploitable output (documentation, publication, demonstration, quality, polish, critical feedback / refinement, customers, whatever)) still takes a good amount of effort and resources.

    So lacking the present availability of my usual collaboration pals, I'm thinking it could be useful to widen the net and be open to new ideas / potential collaborations. Perhaps for better "networking", inspiration, criticism if nothing else. But also ideally some kind of technical / productivity synergy wrt. actually bringing things to fruition more expediently / effectively and with higher quality / probability of successful short / intermediate term impact in some area(s).

    I'm interested by FPGA and GPU programming also. As for the latter, OpenCL can be a useful abstraction layer since there are free runtimes that can be made to work on standard CPU processors as well as supported runtimes offered using the proprietary GPU drivers / HW from Intel, AMD, NVIDIA, and others. Even some pretty low end GPUs still have at least enough shader processors / compute units that they can be useful for simple tasks and certainly for code development / testing even on their "almost bottom of the line" add-on processors. And even lots of motherboards' integrated GPUs from Intel / Nvidia / AMD in recent years are new enough in architecture that they offer at least something useful for development. So as for "hot spares" well I guess a couple of PCs and a couple $20+ (just a guess, I haven't been shopping for the past few card generations) PCIE GPUs should do it for CPU based and GPU based test platforms. But cost and resource consumption (power, heat, ...) is a problem if you specifically want/need to set up a cluster of multi-teraflop GPUs between needing seriously decent host PC platforms as well as expensive and somewhat potentially short lived GPUs. I've been there and done that.
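
    As a concrete illustration of that portability, a minimal OpenCL host program in C can simply enumerate whichever platforms and devices the installed runtimes expose (free CPU runtimes such as PoCL, or vendor GPU runtimes). This is a hedged, general-purpose sketch rather than anything project-specific; it needs only the standard OpenCL headers and library (compile with -lOpenCL):

        /* Hedged sketch: list whatever OpenCL platforms/devices the installed
         * runtimes expose. Uses only standard OpenCL 1.x calls. */
        #include <stdio.h>
        #include <CL/cl.h>

        int main(void)
        {
            cl_platform_id platforms[8];
            cl_uint num_platforms = 0;

            if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS ||
                num_platforms == 0) {
                fprintf(stderr, "no OpenCL runtime found\n");
                return 1;
            }
            if (num_platforms > 8)
                num_platforms = 8;

            for (cl_uint p = 0; p < num_platforms; p++) {
                char name[256];
                clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                                  sizeof name, name, NULL);
                printf("platform %u: %s\n", p, name);

                cl_device_id devices[8];
                cl_uint num_devices = 0;
                if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8,
                                   devices, &num_devices) != CL_SUCCESS)
                    continue;
                if (num_devices > 8)
                    num_devices = 8;

                for (cl_uint d = 0; d < num_devices; d++) {
                    char dev[256];
                    clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                    sizeof dev, dev, NULL);
                    printf("  device %u: %s\n", d, dev);
                }
            }
            return 0;
        }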

    As for FPGAs, well the proprietary tools and HW architectures are disappointing wrt. open tool ecosystems. But the good news is that the HW is getting quite inexpensive and prolific relative to what it was. And the general programming is pretty portable (assuming you stay away from vendor specific IP cores and IC features). q.v. LEON2/3, RISC-V, HLS. Some Cyclone II/III/IV/V, Spartan3, Spartan6, or MAX10 devkits are "sort of" inexpensive depending on what you want from them in terms of functions and capacity. AFAIK someone reverse engineered some of the ICE FPGAs (ICE40? all? some?) and now there's a libre open source toolchain that supports those. Quite small cheap devices / boards, but still, cool. The main question besides "just for learning" is what you want / need from them in terms of actually deploying them in a custom solution. The problem is like the Raspberry PI but worse. The off the shelf dev kits can be "cheap" but also lack quality / features / sane integrability. The "commercial quality" SOMs / SBCs are expensive in low quantities usually (or are very limited if not). And to make a custom PCB that isn't only suitable for the lowest couple of tiers of easy / cheap to use devices (TQFP144 or such package type) is a significantly time consuming and costly proposition in low volumes. Very possible even semi-DIY given a strong need / reason, but not really cost justified if a cheap devkit will serve the purpose instead. $35 buys a RPI kit but building the like of one from chips quantity 1 could cost $2000 or something wrt. the PCB and assembly and high cost in low volume parts etc. etc.

    The "security" or at least "robustness" of controllers is indeed an issue. And devkits are often horribly abused to control things that they really should not be used to control for general reliability reasons, as your examples indicate. It seemed to me like someone should make a proper "vertical" controller "stack" from HW through SW according to best practices / accepted models that is open / cheap or whatever, so maybe there would be fewer "you've got to be serious, you're controlling THAT with an Arduino???" situations. It is an idea I have been considering.

    And then there's the "unfit for the purpose" cheap imported electronics which are bad enough when they are used for unimportant things but then you get into control systems or appliances or whatever that should be "fail safe" and it is just a disaster seeing these things with MTBFs of ~0 days wreaking havoc....

    So, still looking for collaborations?

    • (Score: 2) by cafebabe on Sunday July 02 2017, @11:43AM (5 children)

      by cafebabe (894) on Sunday July 02 2017, @11:43AM (#534146) Journal

      I would definitely like to collaborate with you. You very probably have similar or complementary ideas. I have a number of ideas which work in isolation and combine into something more impressive. Some of the ideas may seem far-fetched but I can justify most of them with example implementations or worked examples. The overall theme would be home entertainment. Specifically, making affordable implementations of existing high-end systems. However, the full scope covers codecs and a desktop windowing system which is intended to maximize compatibility with legacy binaries. (The fractal animation [soylentnews.org] and the torus animation [soylentnews.org] look like fluff but they were written as torture tests for a video codec.)

      My interest in FPGA or GPU programming is primarily for network acceleration. Specifically, performing packet re-encryption at the edge of a home or office network so that consenting users gain a shared cache. Network caching already exists but not in a manner which works well (or at all) with the codecs or desktop windowing. A CPU implementation handling 50Mb/s would be sufficient for many small installations but bandwidth demands have a habit of growing. Rather than ganging together multiple units which each process 50Mb/s or so, it would be great to have an option which processes 1Gb/s or more. The stub implementation would be a CPU implementation which nominally processes a set of network packets. This eases GPU implementation where each packet within a set is processed in parallel by one or more GPU cores. (A similar arrangement would be useful for codec implementation.) At present, knowing the details of CUDA or OpenCL is useful for the purpose of easing future scalability.
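
      In rough C, the batching idea might look like the sketch below: each packet in a batch is transformed independently, so the inner call maps one-to-one onto a GPU work-item later. The names, the packet layout and the XOR stand-in for real re-encryption are all hypothetical:

          /* CPU stub for batched packet re-encryption. Each iteration of the
           * loop in recrypt_batch() is independent, so on a GPU the loop
           * disappears and every packet becomes one thread/work-item.
           * recrypt_packet() is a placeholder: XOR stands in for real crypto. */
          #include <stddef.h>
          #include <stdint.h>

          #define MAX_PKT 1514                 /* illustrative Ethernet frame size */

          struct packet {
              size_t  len;
              uint8_t data[MAX_PKT];
          };

          static void recrypt_packet(struct packet *p,
                                     uint8_t upstream_key, uint8_t edge_key)
          {
              for (size_t i = 0; i < p->len; i++)
                  p->data[i] = (p->data[i] ^ upstream_key) ^ edge_key;
          }

          void recrypt_batch(struct packet *batch, size_t count,
                             uint8_t upstream_key, uint8_t edge_key)
          {
              for (size_t i = 0; i < count; i++)
                  recrypt_packet(&batch[i], upstream_key, edge_key);
          }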

      Anyhow, I'll stop posting fluff and apparent fluff. Instead, I'll explain each idea in a separate article. Improvements and, specifically, simplifications are most definitely welcome. Hopefully, we'll organize this into a specification which has immediate goals, next version goals and future goals. And then we implement.

      --
      1702845791×2
      • (Score: 1) by xyz on Monday July 03 2017, @11:54PM (4 children)

        by xyz (6633) on Monday July 03 2017, @11:54PM (#534614)

        Sounds great.

        Video CODECs are interesting. I was speculating a while back how it could look if some CODEC algorithm was to be designed to be "efficiently" implemented using GPU programmable shader capabilities. As far as I recall, GPU based decoding / encoding / transcoding of "standard CODECs / streams" wasn't (in years past when I looked into it a little bit) shown to be that effective; then again, that was with ~1st generation GPUs (slow / limited by today's standards) and also the "standard CODECs" were designed for ASIC implementation and not efficient programmable implementation. And anyway, compared to an ASIC that one is going to make millions of, one can't really say that anything programmable is area / cost "efficient", but that only means that if you're Qualcomm or Broadcom or such you should prototype with FPGAs / GPUs and then make an ASIC for mass production. Programmable GPU / FPGA technologies can be very effective for cases where making ASICs is not time / cost justified, including for lower volume / faster time to market stuff.

        Network processing / acceleration / cacheing is also interesting. I think 1Gbit/s throughput for a lot of network processing is probably fairly readily achievable with relatively commodity technologies whether server / desktop class CPUs or consumer class GPUs or low-medium range FPGAs depending on the particular details of the task at hand. Some kinds of standard encryption relevant to networking are of course slower than others and some media encoders can be pretty resource intensive. I have seen network processor systems with 100Gb+ class throughput though they did not focus on media just data communications, and of course that gets expensive to make / buy pretty quickly.

        There are off the shelf NPUs and accelerator ICs of various sorts that are designed to handle traffic at wire-line rates usually 1Gb/s and multiples of that depending on the task at hand. When / if they become cost effective to use in a system is debatable depending on the goals / requirements / resources.

        For R&D purposes beyond what could be easily simulated / run in real time "at home" it could be interesting to use cloud compute instances since those are of course available "cheaply" with scalable data center class CPU / network facilities readily available as well as maybe even instance attached GPU or FPGA resources in some of the newer offerings that I haven't investigated.

        Over the years I have done experiments with setting up "edge" or localhost servers (home use) for things like SQUID, Privoxy, NAS, IDS, etc. as well as small home experimental "HPC" clusters.

        It does seem like there's still good opportunity to "appliance-ify" the sea of technological resources that a well equipped home or small business has access to. Relatively powerful but still ordinary PCs have literally (1980s class) supercomputer performance in GFLOP/s and GBy of RAM. "Enthusiast" GPUs give XX TFLOP class peak performance. Even "slow" LAN networking is Gb/s class as a base level. Pretty much the smallest spinning disc you can buy is 2TBy in size for $70 or whatever. But given all that "capability" it is pretty amazing how underutilized it is relative to providing high availability / high throughput / high "value add" local IT resources. Having things like fully local offline instances of OSM, WP, Project Gutenberg, heck the entire "dead tree" form information content of a good sized city's library would be something close to a drop in the bucket to store / process given such a modest system's technological capabilities. But it is not done except by very dedicated institutions. Information searches get offloaded to cloud vendors like Google but you probably have to be an IT sysadmin or running one of the very best in class local OS / IT solutions to even be able to do something comparable locally.

        And with respect to network acceleration, of course it is truly amazing just how much "waste" there is in that missed opportunity. Every "web 2.0" web page one loads probably has something close to 250kBy of "dynamic but really mostly static" content -- images, scripts, markup, style sheets, etc. etc. So it is nothing unusual to see a "browser cache" explode into 100s of MBy for a single user session of some hours, and RAM usage even gets into the 1+ GBy range correspondingly. And of course the cost for loading all that, whether DNS data or "semi-static" content, through a somewhat high latency and somewhat congested pipe is lots of lost time in the "world wide wait", lots of needless network bandwidth and lots of needless local host resource consumption. So I am sure when you multiply that by 2-200 users / site it is quite significant in resource "opportunity cost".
        There are all these "edge" CDNs that try to reduce some of that latency but of course one can do even better by just having more of it local.

        With respect to media caches, yes, that's also a familiar "pain point" as I have run various things like MythTV OTA boxes / media centers and so on and there should certainly be better ways to store / cache / transcode / search / etc. media whether audio, video or whatever.

        • (Score: 2) by cafebabe on Thursday July 06 2017, @02:52AM

          by cafebabe (894) on Thursday July 06 2017, @02:52AM (#535528) Journal

          You should understand the video system in outline after reading:-

          Video would be divided into a number of tiles. This provides a number of advantages and disadvantages. Advantages include parallel decode via GPU and display of arbitrarily large images or video across multiple screens. Disadvantages include decrypting and caching every video tile.

          1Gb/s should be sufficient for a single user not saturating a 1Gb/s link or for dozens of users accessing common content cached from a slower link. Decryption and decompression occur at fairly consistent rates. Therefore, tasks can be specified as clock cycles per bit of data processed. Video decode may be an exception because the ability of a GPU to perform work in parallel depends upon common structure in video tiles. For example, if all tiles encode JPEG DCTs then GPU cores all perform DCT calculations on different data. However, if the range of video tile encodings is more diverse then many GPU cores will wait while others perform specific cases. There is also the problem that GPU cores align with screen resolution rather than media resolution. UHD 3840×2160 video is 60×34 tiles at 64×64 pixels per tile. This is 2040 tiles which may require a 400 core GPU to take six passes where the last pass incurs 10% GPU utilization.
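
          The tile and pass arithmetic above can be reproduced mechanically; this small C sketch just restates the numbers from that paragraph (the 64×64 tile size and the 400-core GPU are the stated assumptions, not fixed requirements):

              /* Restates the tile/pass arithmetic: UHD 3840x2160 video,
               * 64x64 tiles, 400 GPU cores, one tile per core per pass. */
              #include <stdio.h>

              int main(void)
              {
                  const int width = 3840, height = 2160, tile = 64, cores = 400;

                  int tiles_x = (width  + tile - 1) / tile;   /* 60 */
                  int tiles_y = (height + tile - 1) / tile;   /* 34, rounded up */
                  int tiles   = tiles_x * tiles_y;            /* 2040 */
                  int passes  = (tiles + cores - 1) / cores;  /* 6 */
                  int last    = tiles - (passes - 1) * cores; /* 40 tiles remain */

                  printf("%dx%d = %d tiles, %d passes, final pass %.0f%% utilized\n",
                         tiles_x, tiles_y, tiles, passes, 100.0 * last / cores);
                  return 0;
              }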

          In the case of caching combined with micro-payments, the obvious strategy is to cache everything forever. This is particularly true if the storage cost is cheaper than obtaining another copy of the data. If this is not possible then it may be beneficial to flush the cheapest content first.
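
          One way to express the "flush the cheapest content first" fallback is to pick the eviction victim with the lowest replacement cost. Below is a minimal sketch; the cache_entry fields, and the idea of costing entries by the micro-payment needed to re-fetch them, are illustrative assumptions rather than a settled design:

              /* Illustrative "cheapest first" eviction: scan the cache and evict
               * the entry whose replacement cost (e.g. the micro-payment needed
               * to obtain another copy) is lowest. Fields are hypothetical. */
              #include <stddef.h>

              struct cache_entry {
                  double replace_cost;   /* cost of obtaining another copy */
                  size_t size_bytes;     /* storage consumed */
              };

              /* Returns the index of the entry to evict, or -1 if the cache is empty. */
              long pick_victim(const struct cache_entry *entries, size_t count)
              {
                  long victim = -1;

                  for (size_t i = 0; i < count; i++)
                      if (victim < 0 ||
                          entries[i].replace_cost < entries[victim].replace_cost)
                          victim = (long)i;

                  return victim;
              }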

          --
          1702845791×2
        • (Score: 2) by cafebabe on Tuesday July 11 2017, @11:18PM

          by cafebabe (894) on Tuesday July 11 2017, @11:18PM (#537837) Journal

          More articles:-

          --
          1702845791×2
        • (Score: 2) by cafebabe on Sunday July 16 2017, @01:23AM (1 child)

          by cafebabe (894) on Sunday July 16 2017, @01:23AM (#539731) Journal

          More articles:-

          --
          1702845791×2
          • (Score: 2, Informative) by xyz on Sunday July 16 2017, @05:17AM

            by xyz (6633) on Sunday July 16 2017, @05:17AM (#539808)

            Thank you. You're on a great roll with producing excellent quality, insightful, deep, and interesting articles / summaries.
            You have obviously put a great deal of thought and research into these topic areas. Some of the areas and contextual material I'm much more familiar with than others but I'm following along and am in agreement. Many of the noted deficiencies about current systems / practices are all too familiar to me while others were surprising but interesting / informative to learn about in greater depth. Altogether there is certainly a lot of opportunity to do things better / more efficiently / more usably in the realm of data / media transmission and even more so information and media indexing, cataloguing, metadata management, and content aware discovery / transmission / cacheing / storage / utilization / derivation. "Library science" principles and "semantic web / media" ones are vastly underutilized to the point of virtual nonexistence in both the internet and intranet these days.

            You eloquently highlighted one major deficiency (which I often lament) in aptly pointing out that commonly relevant and "highly usable" schemes to "name" / designate, and also locate, articles of information online are fundamentally absent, or at least severely limited in their ubiquity / usability. It was refreshing to see Project Xanadu mentioned as well as other concepts for referring to resources by their identity (e.g. DOI, hash, URI, whatever) as well as by additional dimensionalities (X, Y, temporal offset, logical subdivisions, ...) and how those referential operations could practically and efficiently interact with existing 'de facto' access / query facilities like DNS, UDP, TCP. Of course other data / metadata, categorical, representational, transport, encoding and query systems exist, such as XML, SGML, XQUERY, XPATH, SQL, Dublin Core, Gopher, WAIS, et al., but at the end of the day "none of the above" is really what has been the de facto choice. So in this NATed IPv4 world even hosts don't commonly have useful names / addresses wrt. DNS. What use could be made of key / identity oriented systems (SSO, OAUTH, personal certificates, personal GPG, ubiquitous host / server certificates, et al.) has been very slow and limited in its growth and promulgation. Similarly the "usual" capabilities of DNS have become increasingly underutilized and extensions thereof even more so.
            People even settle en masse for personal email and web addresses that aren't truly "their own" and delegate EVERYTHING (email, document hosting / sharing / editing, messaging, telephony, service hosting, ...) to "here today gone tomorrow" cloud based entities which offer NOTHING of persistent value. Not names/identities (email address, domain name). Not locations (URL stability). Not data storage. Not anything. So in this nameless, metadataless, ephemeral vs. persistent, data-ful but information-less world we live.

            In any case, starting with A/V media as one emphasis, as you have been delving deeply into, is a good idea because it is something concrete. Because it is something "bulky" and therefore especially relevant to considerations of efficient location / distribution / transmission / storage. Because it is something more "abstract" in some ways (it is generally human made and not machine made, and it does not so easily admit machine based "understanding" / NLP / interpretation). It is something with concrete enough attributes and dimensionalities (e.g. temporal and sectional offsets) that a "simple" set of relevant metadata attributes does a good job of at least basically categorizing many aspects of its nature (length, title, subject, contained A/V languages, author, location, sub-stream metadata, ...). It is something that you benefit greatly, or essentially, from having a computer mediate your access to (CODECs, seeking, searching, format conversion, smart streaming / cacheing). It is bulky enough and resource intensive enough that smart decisions (about encoding / decoding / server location / algorithms / cache / mirroring / backup) pay off.

            It is certainly one of the "pain points" of the current internet as visible in the UX. "Please wait...buffering...", "Server not found", "Jitter / freeze", unresponsive pause / play / seek / speed controls, practically nonexistent metadata / information / content management tools (search, filter, translate, subtitle, transcode, cache control, ANNOTATE, EXCERPT, CITE, ...). No particularly smart and effective use is made of real time transcoding or VBR or alternative streaming at least in a way that apparently is functional. Or at least it is often perceived to fail to function even if the heuristic intelligence of those things is unappreciated and unappreciable "when it works seamlessly" in the UX.

            As you said some of that would probably be better if everything wasn't just shoved over TCP with the local client being treated as a "dumb client" barely different than a "TV receiver" to maximize remote control & monitoring at the expense of the UX and the ability to effectively use the information with a smart local "own edge" server / client.

            Of course the ubiquitous enemies of the "free/open internet" and net neutrality seek to monitor, throttle, deprioritize, block, translate and transform your content / media streams at will, based upon the intercepted data / metadata / subject / origin / destination / protocol, while on the other hand, for partly opposing but unfortunately not simply antithetical reasons, other interests seek to make it all opaque, encrypted, and non-standard in protocol / format / encoding / metadata. And the users (or less powerful providers) too often end up with the disadvantages of all possible "approaches". None of that is favorable to the open and standardized implementation of endpoint or path resident cacheing, multicast, translation, useful QoS provisioning, et al.

            Nevertheless, as you've well illustrated, inside the present "Tower of Babel" of data overload / chaos, information underload, and protocol chaos / insufficiency, it is an opportunity-rich environment for added efficiencies and better UXs at all levels.

            So let's see what can be done. I will continue reading and reflecting upon all that has been said in detail and in breadth.
