
posted by Fnord666 on Thursday July 26 2018, @10:01PM   Printer-friendly
from the podcast-me-obiwan-kenobi dept.

Does anyone out there have a favorite Linux program for downloading podcasts? I've been using Chess Griffin's mashpodder but (a) it's now abandonware, and (b) due to the way it identifies files, it doesn't work with modern podcasts where the base name of the file is always "media.mp3" and the earlier parts of the URL change. As such, I'm looking for a replacement, preferably something that I can run as a cron job so that it fires every day without any intervention on my part and where the configuration lives in a file that I can edit with a simple text editor like vim. I'm considering rolling my own in Python just to get more experience with that language, but I thought I'd see if any Soylentils had suggestions for me to check out before I went to the effort of doing that.
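The cron-driven setup described above is small enough to sketch. The following is a minimal, hypothetical Python fetcher, not an existing tool: it reads feed URLs from a plain text file (here assumed to be `~/.podcasts`, one URL per line, editable with vim), and names downloads after the episode title rather than the URL's basename, so feeds whose URLs always end in "media.mp3" still get unique filenames.

```python
# Hypothetical cron-friendly podcast fetcher (stdlib only).
# Config file path and naming scheme are assumptions for illustration.
import os
import urllib.request
import xml.etree.ElementTree as ET


def enclosure_urls(rss_text):
    """Return (title, url) pairs for every <enclosure> in an RSS feed."""
    root = ET.fromstring(rss_text)
    found = []
    for item in root.iter("item"):
        title = item.findtext("title", default="untitled")
        enc = item.find("enclosure")
        if enc is not None and enc.get("url"):
            found.append((title, enc.get("url")))
    return found


def fetch_feed(feed_url, dest_dir):
    """Download every enclosure not already on disk.

    Files are named after the item title, not the URL's basename, so
    feeds where the basename is always "media.mp3" don't collide.
    """
    with urllib.request.urlopen(feed_url) as resp:
        rss_text = resp.read()
    os.makedirs(dest_dir, exist_ok=True)
    for title, url in enclosure_urls(rss_text):
        safe = "".join(c if c.isalnum() else "_" for c in title) + ".mp3"
        path = os.path.join(dest_dir, safe)
        if not os.path.exists(path):
            urllib.request.urlretrieve(url, path)


if __name__ == "__main__":
    cfg_path = os.path.expanduser("~/.podcasts")
    if os.path.exists(cfg_path):  # no-op if the config file is absent
        with open(cfg_path) as cfg:
            for line in cfg:
                line = line.strip()
                if line and not line.startswith("#"):
                    fetch_feed(line, os.path.expanduser("~/podcasts"))
```

A crontab entry like `0 6 * * * python3 ~/bin/podfetch.py` would then fire it daily with no intervention.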


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by The Mighty Buzzard on Thursday July 26 2018, @10:32PM (7 children)

    I'd roll my own but that's just the kind of guy I am. It'd be dirt simple if they have RSS feeds of the podcasts.

    --
    My rights don't end where your fear begins.
  • (Score: 5, Insightful) by Arik on Thursday July 26 2018, @11:18PM (2 children)

    by Arik (4543) on Thursday July 26 2018, @11:18PM (#713438) Journal
    You'll never have a bug report ignored, and while it's possible your feature requests turn out to take too much time, at least you'll be kept informed.
    --
    If laughter is the best medicine, who are the best doctors?
    • (Score: 2) by epitaxial on Friday July 27 2018, @12:50PM (1 child)

      by epitaxial (3165) on Friday July 27 2018, @12:50PM (#713644)

Oh yeah, sure, let me sign up for some local comp-sci courses first....

      • (Score: 2) by Arik on Friday July 27 2018, @02:09PM

        by Arik (4543) on Friday July 27 2018, @02:09PM (#713675) Journal
        What for? The information you want is online, available at your convenience, to suit your schedule, without any other students to distract you and without any grouchy professor to make the subject difficult.
        --
        If laughter is the best medicine, who are the best doctors?
  • (Score: 2) by MichaelDavidCrawford on Friday July 27 2018, @02:23AM (3 children)

    I once tried to build The Siterip Toolkit on top of wget, but eventually concluded that for it to really work I'd have to write native code that links to libcurl.

    On the bright side, I was able to download a vast quantity of pr0n, it's just that I was never able to download all the pr0n on any of my test websites.

    --
    Yes I Have No Bananas. [gofundme.com]
    • (Score: 2) by The Mighty Buzzard on Friday July 27 2018, @10:34AM (2 children)

While that's obviously an upstanding motivation, I dislike scraping pages when we now have APIs and feeds as alternative data sources.

      --
      My rights don't end where your fear begins.
      • (Score: 2) by MichaelDavidCrawford on Friday July 27 2018, @11:27AM

        However it does enable using one's mouse to download its videos.

        --
        Yes I Have No Bananas. [gofundme.com]
      • (Score: 2) by MichaelDavidCrawford on Friday July 27 2018, @11:35AM

        That will happen when I introduce some automation to the site.

        There will be different kinds of feeds: every new company anywhere on the planet, new companies for each country as well as for the European Union, new companies for each city and so on.

My much older and admittedly far wiser mentor, upon growing weary of urging me to use PHP and MySQL, offered to import the entire site into MySQL for me, but after thinking it over I'm actually going to decline his offer.

Instead I'll use Subversion, with a multiply redundant offsite SVN repository backup that's rolled after each commit.

        All of the automated tools I will use to edit the site will run locally on my machine, which is also set up so that "jobs.velvet" serves my Mac's local copy of the site; that enables Server-Side Includes to work locally.

        My repo will be on my server. A post-commit hook will check out the tree to a staging host. A script that requires root will check out the tree to the live site.
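The post-commit step described above is a standard Subversion hook: the server invokes `hooks/post-commit` with the repository path and the new revision number. A minimal sketch in Python (the staging path here is made up for illustration):

```python
# Hypothetical post-commit hook sketch: Subversion calls this script as
#   post-commit REPOS-PATH REV
# and we refresh a staging working copy to that revision.
import subprocess
import sys

STAGING = "/var/www/staging"  # assumed location of the staging checkout


def update_cmd(staging_dir, revision):
    """Build the 'svn update' command that syncs staging to the commit."""
    return ["svn", "update", "--revision", str(revision),
            "--non-interactive", staging_dir]


if __name__ == "__main__":
    if len(sys.argv) >= 3:  # repos path and revision passed in by svn
        repos, rev = sys.argv[1], sys.argv[2]
        subprocess.run(update_cmd(STAGING, rev), check=True)
```

Pushing to the live site would stay out of the hook entirely, since that script requires root and should be run deliberately.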

        My very first tool will be some manner of tool that enables me to enter all the locations of a given company all at once. That specification will be a text file at first but later I'll do a GUI.
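A text-file spec like the one described could be as simple as the company name on the first line and one location per line after it. This parser is purely a guess at such a format, since the actual spec isn't defined yet:

```python
# Hypothetical parser for a plain-text "all locations at once" spec:
#   line 1: company name
#   lines 2..n: one location each (blank lines ignored)
def parse_spec(text):
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    return {"company": lines[0], "locations": lines[1:]}
```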

If I had just that, my rate of publishing new locations would grow by one or two orders of magnitude. Consider that Oracle isn't a database publisher; it's a consultancy.

        --
        Yes I Have No Bananas. [gofundme.com]