posted by janrinok on Wednesday March 04 2015, @07:27PM
from the over-to-you dept.

What free software is there in the way of organizing lots of documents?

To be more precise, the ones I *need* to organize are the files on hard drives, though if I could include documents I have elsewhere (bookshelves and photocopy files) I wouldn't mind. They are text documents in a variety of file formats and languages, source code for current and obsolete systems, jpeg images, film clips, drawings, SVG files, object code, shared libraries, fragments of drafts of books, ragged software documentation, works in progress ...

Of course the files are already semi-organized in directories, but I haven't yet managed to find a suitable collection of directory names. Hierarchical classification isn't ideal -- there are files that fit in several categories, and there are a lot of files that have to be in a particular location because of the way they are used (executables in a bin directory, for example) or the way they are updated or maintained. Taxonomists would advise setting up a controlled vocabulary of tags and attaching tags to the various files. I'd end up with a triple store or some other database describing files.
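For illustration, the kind of tag database I have in mind could be as small as this -- a rough, untested SQLite sketch (the table layout, file identifiers and tag names are all invented for the example):

    # Minimal sketch of a tag store: (file_id, tag) pairs in SQLite.
    # "file_id" is whatever stable identifier the files end up getting.
    import sqlite3

    con = sqlite3.connect("tags.db")
    con.execute("""CREATE TABLE IF NOT EXISTS tags (
                       file_id TEXT,
                       tag     TEXT,
                       PRIMARY KEY (file_id, tag))""")

    def add_tag(file_id, tag):
        con.execute("INSERT OR IGNORE INTO tags VALUES (?, ?)", (file_id, tag))
        con.commit()

    def files_with_tag(tag):
        return [row[0] for row in
                con.execute("SELECT file_id FROM tags WHERE tag = ?", (tag,))]

    add_tag("some-file-id", "source-code")
    add_tag("some-file-id", "obsolete-system")
    print(files_with_tag("source-code"))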


But how to identify the files being tagged? A file-system pathname isn't enough. Files get moved, and sometimes entire directory trees full of files get moved from one place to another for various pragmatic reasons. And a hashcode isn't enough. Files get edited, upgraded, recompiled, reformatted, converted from JIS code to UTF-8, and so forth. Images get cropped and colour-corrected. And under these changes they should keep their assigned classification tags.
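One idea I keep coming back to (entirely untested) is to stamp each file with a generated UUID and hang the tags off that, since the UUID could survive edits, renames and moves. A rough, Linux-only sketch using extended attributes (the attribute name is invented for the example; xattrs need filesystem support and aren't preserved by every copy or backup tool):

    # Give a file a stable identity that survives edits and renames by storing
    # a generated UUID in an extended attribute (Linux / xattr-capable
    # filesystems only).
    import os, uuid

    ATTR = "user.doc_id"   # attribute name chosen purely for illustration

    def get_or_assign_id(path):
        try:
            return os.getxattr(path, ATTR).decode()
        except OSError:                       # attribute not set yet
            new_id = str(uuid.uuid4())
            os.setxattr(path, ATTR, new_id.encode())
            return new_id

    print(get_or_assign_id("some/file.txt"))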

Now a number of file formats can accommodate metadata. And some software that manipulates files can preserve metadata and even allow user editing of the metadata. But most doesn't.
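For the formats that do carry their own metadata, writing tags in is easy enough; the trouble is how easily they disappear when another program rewrites the file without copying them. A small sketch using the Pillow library to put a tag string into a PNG text chunk (the file name and key are just examples):

    # Embed tags in a PNG text chunk using Pillow (pip install Pillow).
    from PIL import Image, PngImagePlugin

    img = Image.open("figure.png")
    info = PngImagePlugin.PngInfo()
    info.add_text("tags", "diagram;work-in-progress")
    img.save("figure_tagged.png", pnginfo=info)

    # Read the tags back out of the saved file.
    print(Image.open("figure_tagged.png").text)
    # Re-saving without passing pnginfo= would silently drop the chunk.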

Much of it could perhaps be done by automatic content analysis. Other material may require labour-intensive manual classification. Now I don't expect to see any off-the-shelf solution for all of this, but does anyone have ideas as to how to accomplish even some of this? Even poorly? Does anyone know of relevant practical tools? Or have ideas towards tools that *should* exist but currently don't? I'm ready to experiment.
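As a strawman for the automatic part, even something as crude as this might produce useful candidate tags for later manual review -- an untested sketch that guesses from the MIME type and from a hand-made vocabulary (directory and vocabulary are invented for the example):

    # First-pass automatic classification: guess a coarse tag from the MIME
    # type, and scan small text files for terms from a controlled vocabulary.
    import mimetypes
    from pathlib import Path

    VOCAB = {"invoice", "lecture", "recipe", "kanji"}   # example terms

    def candidate_tags(path):
        tags = set()
        mime, _ = mimetypes.guess_type(path.name)
        if mime:
            tags.add(mime.split("/")[0])        # "text", "image", "video", ...
        if mime and mime.startswith("text") and path.stat().st_size < 1_000_000:
            words = path.read_text(errors="ignore").lower().split()
            tags |= VOCAB & set(words)
        return tags

    for p in Path("archive").rglob("*"):
        if p.is_file():
            print(p, candidate_tags(p))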

 
  • (Score: 2) by Immerman (3985) on Thursday March 05 2015, @08:27PM (#153639)

    Quite. Most everything else I've tried uses the "enter terms then initiate search" interaction, which doesn't begin to compare for ease-of-use on a regular basis - I rarely even use a file browser anymore.

    Given the speed of search - apparently instant, even when the initial list shows a quarter-million entries and multiple word fragments are used (though admittedly, by the time you've finalized the first fragment the list has already been reduced dramatically) - my first instinct would be that it uses an optimized version of a traditional sparse-matrix indexing scheme, with every file listed under all possible fragments (MyFile gets indexed under MyFile, yFile, File, ile, le, and e). But then indexing schemes were never really my forte. A grep-style scan over hundreds of thousands of entries should (I would think) take a decent fraction of a second on a slow machine, but I've never noticed any lag at all. Though presuming that each new character is only searched for within the results of the previous step would reduce that significantly after the first couple of characters are entered.
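    To make that last point concrete, here's roughly what I mean by narrowing (a toy sketch, names and file list invented): once a fragment has been typed, each further character only has to be tested against the files that already matched, so the per-keystroke cost keeps shrinking.

        # Each new character is only searched for within the previous results.
        def narrow(previous_matches, query):
            return [name for name in previous_matches if query in name.lower()]

        all_names = ["MyFile.txt", "Makefile", "notes/myfile-old.txt", "photo.jpg"]
        matches = all_names
        for query in ("m", "my", "myf"):       # the characters typed so far
            matches = narrow(matches, query)
            print(query, "->", matches)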

    Hmm, let's see if we can find some hints - on my current system it's listing 144,000 files, with a memory usage (under WINE, not sure how that might affect things) of 15.2MiB and a database size of 2.0MB. So that's a maximum average of ~15 bytes per file in the database, and ~108 in the live index, with only a fraction of a second required to build the index from the database (which, when opened in a hex editor, appears to be full of fragmentary file names interspersed with binary data). My guess would be it's using a variation of the traditional text index where MyFile gets indexed under MyFile, yFile, File, ile, le, and e, but I'm well outside my area of competency.
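    For what it's worth, the fragment-index guess above can be sketched in a few lines (a toy version with invented file names; the real program almost certainly uses something more compact): index every suffix of every name, sort them, and answer a query with a binary search for the fragment as a prefix of some suffix.

        # Toy substring index: every suffix of every filename, sorted, with a
        # binary search for the typed fragment as a prefix of some suffix.
        import bisect

        names = ["MyFile.txt", "Makefile", "yfile_backup.tar", "notes.org"]

        index = sorted((name[i:].lower(), name)      # (suffix, original name)
                       for name in names
                       for i in range(len(name)))

        def lookup(fragment):
            fragment = fragment.lower()
            pos = bisect.bisect_left(index, (fragment, ""))
            hits = set()
            while pos < len(index) and index[pos][0].startswith(fragment):
                hits.add(index[pos][1])
                pos += 1
            return hits

        print(lookup("yfile"))   # MyFile.txt and yfile_backup.tar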
